Required ports for JMS using HornetQ (JBoss) to expose on docker container

I’m using Docker to link a JMS server container to a JMS client container, but when I run the server in the Docker container, the client cannot connect to it. I exposed port 443 on the container (is there any other port that JMS uses?).

I can successfully look up the destination, but not create the JMSContext:

    String PROVIDER_URL = "https-remoting://MYDOMAIN:443";
    ...
    
    /** PASSED **/
    Destination destination = (Destination) namingContext.lookup(destinationString);
    
    /** HAS ERROR **/
    JMSContext context = connectionFactory.createContext(username, password);
    

    Here is the error:

    java.nio.channels.UnresolvedAddressException
        at sun.nio.ch.Net.checkAddress(Net.java:123)
        at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:621)
        at io.netty.channel.socket.nio.NioSocketChannel.doConnect(NioSocketChannel.java:176)
        at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.connect(AbstractNioChannel.java:169)
        at io.netty.channel.DefaultChannelPipeline$HeadHandler.connect(DefaultChannelPipeline.java:1008)
        at io.netty.channel.DefaultChannelHandlerContext.invokeConnect(DefaultChannelHandlerContext.java:495)
        at io.netty.channel.DefaultChannelHandlerContext.connect(DefaultChannelHandlerContext.java:480)
        at io.netty.channel.ChannelOutboundHandlerAdapter.connect(ChannelOutboundHandlerAdapter.java:47)
        at io.netty.channel.CombinedChannelDuplexHandler.connect(CombinedChannelDuplexHandler.java:168)
        at io.netty.channel.DefaultChannelHandlerContext.invokeConnect(DefaultChannelHandlerContext.java:495)
        at io.netty.channel.DefaultChannelHandlerContext.connect(DefaultChannelHandlerContext.java:480)
        at io.netty.channel.ChannelDuplexHandler.connect(ChannelDuplexHandler.java:50)
        at io.netty.channel.DefaultChannelHandlerContext.invokeConnect(DefaultChannelHandlerContext.java:495)
        at io.netty.channel.DefaultChannelHandlerContext.connect(DefaultChannelHandlerContext.java:480)
        at io.netty.channel.DefaultChannelHandlerContext.connect(DefaultChannelHandlerContext.java:465)
        at io.netty.channel.DefaultChannelPipeline.connect(DefaultChannelPipeline.java:847)
        at io.netty.channel.AbstractChannel.connect(AbstractChannel.java:199)
        at io.netty.bootstrap.Bootstrap$2.run(Bootstrap.java:165)
        at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:354)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:353)
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:101)
        at java.lang.Thread.run(Thread.java:745)
    
    Exception in thread "main" javax.jms.JMSRuntimeException: Failed to create session factory
        at org.hornetq.jms.client.JmsExceptionUtils.convertToRuntimeException(JmsExceptionUtils.java:98)
        at org.hornetq.jms.client.HornetQConnectionFactory.createContext(HornetQConnectionFactory.java:149)
        at org.hornetq.jms.client.HornetQConnectionFactory.createContext(HornetQConnectionFactory.java:130)
        at com.wpic.uptime.Client.main(Client.java:100)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:483)
        at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134)
    Caused by: javax.jms.JMSException: Failed to create session factory
        at org.hornetq.jms.client.HornetQConnectionFactory.createConnectionInternal(HornetQConnectionFactory.java:673)
        at org.hornetq.jms.client.HornetQConnectionFactory.createContext(HornetQConnectionFactory.java:140)
        ... 7 more
    Caused by: HornetQNotConnectedException[errorType=NOT_CONNECTED message=HQ119007: Cannot connect to server(s). Tried with all available servers.]
        at org.hornetq.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:905)
        at org.hornetq.jms.client.HornetQConnectionFactory.createConnectionInternal(HornetQConnectionFactory.java:669)
        ... 8 more
    

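    The root `UnresolvedAddressException` means the client JVM could not resolve the hostname it was told to connect to, before any TCP connection was even attempted. A minimal sketch of how the NIO layer raises this exact exception (using a reserved `.invalid` name as a stand-in for the unresolvable host):

```java
import java.net.InetSocketAddress;
import java.nio.channels.SocketChannel;
import java.nio.channels.UnresolvedAddressException;

public class UnresolvedDemo {
    public static void main(String[] args) throws Exception {
        // ".invalid" is reserved (RFC 2606), so this name can never resolve.
        InetSocketAddress addr = new InetSocketAddress("some-host.invalid", 443);
        System.out.println("unresolved? " + addr.isUnresolved()); // true

        // SocketChannel.connect() rejects unresolved addresses with the same
        // exception seen at the top of the stack trace above.
        try (SocketChannel ch = SocketChannel.open()) {
            ch.connect(addr);
        } catch (UnresolvedAddressException e) {
            System.out.println("caught UnresolvedAddressException");
        }
    }
}
```

    So the client never reached port 443 at all; the name it was handed by the server simply did not resolve.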
  • One solution for “Required ports for JMS using HornetQ (JBoss) to expose on docker container”

    I just found the solution to this problem; I was running into it as well.

    In your case the problem is in the JBoss configuration; in my case it was in Wildfly 8.2.

    You are probably using the following parameter in your JBoss configuration:
    jboss.bind.address = 0.0.0.0

    I use this setting in my Wildfly so that it accepts external connections from any IP, because my Wildfly instance is exposed on the Internet.

    The problem is that if you do not tell JBoss/Wildfly which IP HornetQ should report to JMS clients doing remote lookups, HornetQ assumes the IP set in jboss.bind.address. Here, 0.0.0.0 is not a valid IP to report. You will probably see the following message in your JBoss log:

    INFO [org.hornetq.jms.server] (ServerService Thread Pool -- 53)
    HQ121005: Invalid "host" value "0.0.0.0" detected for "http-connector"
    connector. Switching to "hostname.your.server". If this new address is
    incorrect please manually configure the connector to use the proper
    one.

    In this case HornetQ will use the machine's hostname; on Linux, for example, it will use what is defined in /etc/hostname.
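    To see the name the server would fall back to, and whether a given client can resolve it, a quick check (the hostname shown is simply whatever the local JVM reports):

```java
import java.net.InetAddress;
import java.net.InetSocketAddress;

public class HostnameCheck {
    public static void main(String[] args) throws Exception {
        // The name the server would advertise when it falls back from 0.0.0.0.
        String host = InetAddress.getLocalHost().getHostName();
        System.out.println("local hostname: " + host);

        // From a client's point of view: does this name resolve at all?
        // Run this on the client machine with the server's hostname.
        InetSocketAddress probe = new InetSocketAddress(host, 8080);
        System.out.println("resolvable here? " + !probe.isUnresolved());
    }
}
```

    If the second line prints false on the client, you have exactly the failure described here.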

    There is another problem: the hostname is usually not a valid Internet host name that can be resolved to an IP via a DNS service.

    So here is what is probably happening to you: your JBoss server is configured to bind to 0.0.0.0, and HornetQ (embedded in JBoss) tries to use that IP; since it is not a valid address, it falls back to the server's hostname. When your remote JMS client (outside your local network) performs a lookup against JBoss, HornetQ tells the client to look for the HornetQ resources on the host YOUR_HOSTNAME_LOCAL_SERVER, but the client cannot resolve that name through DNS, so the following failure occurs:

    java.nio.channels.UnresolvedAddressException
    at sun.nio.ch.Net.checkAddress(Net.java:123)
    ... (same stack trace as above)

    The solution is to configure in JBoss which host it should report to clients that are doing remote lookups.

    In my case, for Wildfly, the following must be changed in the standalone.xml file:

    <subsystem xmlns="urn:jboss:domain:messaging:2.0">    
       <hornetq-server>
          <security-enabled>true</security-enabled>
          <journal-file-size>102400</journal-file-size>
    
          <connectors>
             <http-connector name="http-connector" socket-binding="http-remote-jms">
                <param key="http-upgrade-endpoint" value="http-acceptor"/>
             </http-connector>
          </connectors>
                    ...    
       </hornetq-server> 
    </subsystem>
    

    AND

    <socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
    ...
       <outbound-socket-binding name="http-remote-jms">  
          <remote-destination host="YOUR_REAL_HOSTNAME" port="${jboss.http.port:8080}"/>  
       </outbound-socket-binding>   
    </socket-binding-group>
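    On the client side, the JNDI environment then has to point at that same advertised host and port. A minimal sketch, assuming the standard Wildfly remote naming factory and plain HTTP remoting on port 8080 (host and factory class are placeholders for your setup):

```java
import java.util.Properties;
import javax.naming.Context;

public class ClientEnvSketch {
    public static void main(String[] args) {
        Properties env = new Properties();
        // Hypothetical factory class; the actual one depends on the
        // client jars you use (jboss-client.jar for Wildfly 8.x).
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "org.jboss.naming.remote.client.InitialContextFactory");
        // Plain http-remoting on the default Wildfly port, matching the
        // <remote-destination> host configured above.
        env.put(Context.PROVIDER_URL, "http-remoting://YOUR_REAL_HOSTNAME:8080");
        System.out.println(env.getProperty(Context.PROVIDER_URL));
        // new InitialContext(env) would require the Wildfly client jars
        // and a running server, so it is omitted here.
    }
}
```

    The key point is that the URL's host must be a name the client can resolve, not the server's internal hostname.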
    

    Note that I’m not using HTTPS because I could not get Wildfly to work with HTTPS for JMS.
