Discussion: Synchronous Splitter
Alberto Aresca
2012-06-28 02:13:47 UTC
Hi, you can use the brand-new foreach message processor introduced in Mule 3.3.

You can find the doc here:
http://www.mulesoft.org/documentation/display/MULECDEV/Foreach+Doc

HTH,
Alberto
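For example, something along these lines (a minimal sketch; the flow name and logger message are just placeholders, and it assumes the payload is a collection):

<flow name="processEachElement">
    <!-- assuming the message payload is a java.util.List -->
    <foreach>
        <!-- inside the loop, the payload is the current element -->
        <logger level="INFO" message="Processing: #[payload]" />
    </foreach>
    <!-- after the loop, foreach restores the original collection payload -->
</flow>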

Danny Cheng
2012-06-28 18:42:54 UTC
Hmm, I can't find the schema for 3.3 yet, but is it possible to put a processor-chain inside foreach? It is still acting like a splitter, where each element of the array is spawned in a different thread. Basically, I need foreach to act like a for loop: the next message in the array is processed only after the previous one has finished.

I am querying the database for a list of persistent messages, and I need to send them in the proper sequence. On top of that, whether the next message in the list is sent depends on whether the previous one was sent successfully. Right now I am running into race conditions because of the asynchronous behaviour.
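What I have in mind is something like this (a rough sketch; the endpoint name is just a placeholder):

<foreach>
    <processor-chain>
        <!-- each element should run through this whole chain
             before foreach moves on to the next element -->
        <outbound-endpoint ref="someEndpoint" />
    </processor-chain>
</foreach>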
Thanks.

Danny Cheng
2012-06-28 19:39:55 UTC
I think I know what my problem is. I have something along the lines of:

<foreach>
    <until-successful>
        ....
    </until-successful>
</foreach>

Since until-successful is asynchronous, foreach considers the first iteration done and starts processing the second element even though the first one hasn't actually finished.
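If upgrading is an option, later Mule versions (3.5 and up) add a synchronous attribute to until-successful, which should avoid exactly this; a minimal sketch assuming such a version (retry settings and the endpoint name are placeholders):

<foreach>
    <!-- with synchronous="true" (Mule 3.5+), until-successful blocks until the
         element succeeds or exhausts its retries, so foreach only advances
         once the current iteration has really finished -->
    <until-successful synchronous="true" maxRetries="3" secondsBetweenRetries="5">
        <outbound-endpoint ref="someEndpoint" />
    </until-successful>
</foreach>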
