Solved

MRP Socket Disconnect Error

  • 18 March 2022
  • 7 replies
  • 538 views

Userlevel 4

10.2.700

 

We have our evaluation upgrade server in place, and it’s running all programs normally; printing is working as well.  But when we run MRP, we receive the error below.  I searched this forum for the keywords, and the only hit was from a company upgrading from 9.05 to 10.2, but unfortunately they only posted, essentially, “we found the answer” without giving the technical details.  Have any of you ever done an upgrade and, when trying to run MRP in the new system, had the socket refuse to process?  It’s sticking on “deleting transfer order suggestions,” which I think is the first thing it tries to do.

 

Thanks

...Monty.

"E102700TEST": A communication error occurred trying to run task ID 965021 for agent "SystemTaskAgent" on the application server (User: "mwilson", Task Description: "Process MRP").
If this continues to happen investigate if you need to increase the receive and send timeouts in your web.config.
Error details:
System.ServiceModel.CommunicationException: The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '1.00:00:00'. ---> System.IO.IOException: The read operation failed, see inner exception. ---> System.ServiceModel.CommunicationException: The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '1.00:00:00'. ---> System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host
   at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
   at System.ServiceModel.Channels.SocketConnection.ReadCore(Byte[] buffer, Int32 offset, Int32 size, TimeSpan timeout, Boolean closing)
   --- End of inner exception stack trace ---
   at System.ServiceModel.Channels.SocketConnection.ReadCore(Byte[] buffer, Int32 offset, Int32 size, TimeSpan timeout, Boolean closing)
   at System.ServiceModel.Channels.SocketConnection.Read(Byte[] buffer, Int32 offset, Int32 size, TimeSpan timeout)
   at System.ServiceModel.Channels.DelegatingConnection.Read(Byte[] buffer, Int32 offset, Int32 size, TimeSpan timeout)
   at System.ServiceModel.Channels.ConnectionStream.Read(Byte[] buffer, Int32 offset, Int32 count)
   at System.Net.FixedSizeReader.ReadPacket(Byte[] buffer, Int32 offset, Int32 count)
   at System.Net.Security.NegotiateStream.StartFrameHeader(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
   at System.Net.Security.NegotiateStream.ProcessRead(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
   --- End of inner exception stack trace ---
   at System.Net.Security.NegotiateStream.ProcessRead(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
   at System.Net.Security.NegotiateStream.Read(Byte[] buffer, Int32 offset, Int32 count)
   at System.ServiceModel.Channels.StreamConnection.Read(Byte[] buffer, Int32 offset, Int32 size, TimeSpan timeout)
   --- End of inner exception stack trace ---

Server stack trace: 
   at System.ServiceModel.Channels.StreamConnection.Read(Byte[] buffer, Int32 offset, Int32 size, TimeSpan timeout)
   at System.ServiceModel.Channels.SessionConnectionReader.Receive(TimeSpan timeout)
   at System.ServiceModel.Channels.SynchronizedMessageSource.Receive(TimeSpan timeout)
   at System.ServiceModel.Channels.TransportDuplexSessionChannel.Receive(TimeSpan timeout)
   at System.ServiceModel.Channels.TransportDuplexSessionChannel.TryReceive(TimeSpan timeout, Message& message)
   at System.ServiceModel.Channels.SecurityChannelFactory`1.SecurityDuplexChannel.TryReceive(TimeSpan timeout, Message& message)
   at System.ServiceModel.Dispatcher.DuplexChannelBinder.Request(Message message, TimeSpan timeout)
   at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)
   at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)
   at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message)

Exception rethrown at [0]: 
   at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
   at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
   at Ice.Contracts.RunTaskSvcContract.RunTask(Int64 ipTaskNum)
   at Ice.Proxy.Lib.RunTaskImpl.RunTask(Int64 ipTaskNum) in C:\_Releases\ICE\ICE3.2.700.0\Source\Shared\Contracts\Lib\RunTask\RunTaskImpl.cs:line 155
   at Ice.TaskAgentCore.ServiceCaller.<>c__DisplayClass34_0.<RunTask_RunTask>b__0(RunTaskImpl impl)
   at Ice.TaskAgentCore.ImplCaller.RunTaskImplCaller`1.<>c__DisplayClass4_0.<Call>b__0(TImpl impl)
   at Ice.TaskAgentCore.ImplCaller.RunTaskImplCaller`1.Call[TResult](Func`2 doWork, ExceptionBehavior communicationExceptionBehavior, ExceptionBehavior timeoutExceptionBehavior)
   at Ice.TaskAgentCore.ImplCaller.RunTaskImplCaller`1.Call(Action`1 doWork, ExceptionBehavior communicationExceptionBehavior, ExceptionBehavior timeoutExceptionBehavior)
   at Ice.TaskAgentCore.ServiceCaller.RunTask_RunTask(Int64 sysTaskNum, ExceptionBehavior communicationExceptionBehavior, ExceptionBehavior timeoutExceptionBehavior)
   at Ice.TaskAgentCore.ScheduleProcessor.CallServiceAction(SysTaskRow sysTaskRecord, SysTaskParamRow companyParamRecord, ServiceCallArguments serviceCallArguments)


Best answer by mwilson 23 March 2022, 21:31


7 replies

Userlevel 3

Monty,

Yes, it was a problem in the system agent:

Make sure these credentials are populated and that the user has administrator privileges on the server.  If that works, I can ask my admin whether the permissions can be reduced at all, if that’s a concern.

I’ve seen problems like this where they had to increase the limit in the web.config file.  Also, check your Process MRP settings: the Number of MRP Processes and Number of MRP Schedulers should not exceed your hardware.  Finally, check 1) the last log created and 2) any other logs that might contain warnings or errors.  Too many errors can cause the process to lose threads until it can no longer run.  Examples: inactive components in jobs, two MRP runs scheduled at the same time in the system agent, record locks, and so on.  I hope this helps.
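
On the web.config point: the “receive and send timeouts” the error text mentions are the standard WCF binding timeouts in the app server’s web.config.  A rough sketch of the kind of element to look for is below; the binding type, name, and values are illustrative only (not taken from an actual Epicor install), and your web.config may structure its bindings differently, so treat this as a pointer rather than a drop-in change.

   <system.serviceModel>
     <bindings>
       <netTcpBinding>
         <!-- Illustrative only: the real binding type and name may differ.
              Timeouts use the .NET TimeSpan format d.hh:mm:ss; the error's
              local socket timeout of '1.00:00:00' is one day. -->
         <binding name="ExampleTcpBinding"
                  receiveTimeout="1.00:00:00"
                  sendTimeout="01:00:00" />
       </netTcpBinding>
     </bindings>
   </system.serviceModel>

Bumping these only helps when the call is genuinely slow; if the remote host is closing the connection outright (as the inner SocketException here suggests), the timeout values usually aren’t the root cause.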

I don’t know how comfortable you are doing tracing, but we found transfer orders were causing MRP to fail in 10.1.600.  I don’t know if that was fixed in 10.2 or at what version.  We ended up not using transfer orders at all because of MRP failing, the way transfer orders handle serial numbers (you have to put the job to stock before you can ship), and configured parts (not handled at all).  We went to PO/SO instead.

Good luck!

Jenn

Userlevel 4

@kcote Thanks Keith!  We’re checking on this now.

 

@gpennington Thanks much Greg!  I’d been told by Epicor that there was no benefit, but also no harm, in overstating these numbers.  Sounds like you’ve had the opposite experience and we’ll definitely try 1/1 if the above doesn’t work.

 

@jenn.lisser Jenn, thanks much!  I didn’t realize these could be an issue, and in fact the company in question doesn’t need transfer orders.  May I ask how you disabled them for MRP?

 

Thanks one and all,

...Monty.

I thought we eliminated the transfer definitions in the plant (site) configuration/maintenance and made sure none of the parts were flagged as transfers.  I can’t remember whether we got a special fix program or not, though.

Userlevel 4

We had a case open on this during a prior upgrade: it is necessary to clear out the URL box on System Agent Maintenance / Detail / Appserver URL.  Then clear the stuck MRP (if it’s still stuck), kick it off again, and it runs normally.  We don’t know why this works, but it seems to be running normally now.

 

Thanks all!

...Monty.

[Screenshot: System Agent Maintenance panel showing the Appserver URL field to be cleared]

 

Monty, I know this is an old thread, but I wanted to ask whether the issue caused failures all the time or only sporadically.  We have good MRP runs, but then it’ll fail for apparently no reason.  Also, @kcote mentioned that the user name for the System Agent should be an administrator on the server.  Is he speaking about being an Epicor administrator, and if so, how would that be set up?
