First of all, I'm also thinking that injecting the model into the
mediator is not the best option. If your mediator needs data
and this data is saved in the model, there is no need to send the
whole model - you can always dispatch a custom event with a payload
that holds the necessary stuff. I also don't think that calling a
method of the service from the model is a good idea, because in
that case you can use this model only with this service - or at
least you'd need a method with exactly the same name in all your
services. Custom events are really helpful here. Let's take
your example (with the shopping cart). When the data is received by
the service, it's better to simply dispatch an event which can be
caught by the model. The model parses the data and fills itself with
VOs. If you want to perform some action on a specific VO from the
model, i.e. to delete an item or add a new one to the cart, use the
shared event dispatcher. For example: dispatch a custom event that
carries just the item in question as its payload.
In this case the class (command, service, mediator or something
else) doesn't care who will catch this event. Only the model listens
for it and updates its state. And in the end you have:
- a service that is not tightly coupled to the model
- a model that only listens for events, i.e. doesn't depend on the service
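The event-driven flow described above could be sketched like this (the class and event names are illustrative, not from the thread):

```actionscript
import flash.events.Event;

// Hypothetical custom event carrying only the payload the model needs
public class CartEvent extends Event
{
    public static const ADD_ITEM:String = "addItem";
    public static const REMOVE_ITEM:String = "removeItem";

    public var item:CartItemVO;

    public function CartEvent(type:String, item:CartItemVO)
    {
        super(type);
        this.item = item;
    }
}

// Anywhere in the app, via the shared event dispatcher:
// dispatch(new CartEvent(CartEvent.ADD_ITEM, itemVO));

// In the model, which listens and updates its own state:
// eventDispatcher.addEventListener(CartEvent.ADD_ITEM, onAddItem);
```

The dispatcher doesn't know (or care) that the model is the listener, which is exactly the decoupling being described.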
Posted by Stray (Support Staff) on 18 Feb, 2011 10:05 AM
It's perfectly possible for the model to run checks on the data
it is passed through its setters - whether as a VO or as individual
values. To pass a VO to the model, my method is usually a simple
typed setter.
And because I use Commands, if the values need to be checked
(because they come from a user input form for example) then usually
there is a form validator that is used in the Command before the
data is submitted to the model. But the model can do its own checks
on values as well and usually does.
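For example, a model setter that guards its own state might look something like this (all names here are hypothetical):

```actionscript
public function set user(vo:UserVO):void
{
    // the model validates what it is given, regardless of who sends it
    if (vo == null || vo.id < 1)
        throw new ArgumentError("Invalid UserVO passed to UserModel");

    _user = vo;

    // announce the change; interested mediators pick this up via the eventMap
    dispatch(new UserModelEvent(UserModelEvent.USER_CHANGED));
}
```

The Command-level form validation and the model's own checks are complementary: the first protects the user experience, the second protects the model's invariants.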
So... I don't see that there is a problem with model encapsulation
here. In fact, I couldn't find anywhere in the thread that talked
about encapsulation at all... I know we talked about 'state plus
logic' but I'm not sure that's the same thing as encapsulation - I
was more trying to define the edges of what is / isn't a model.
To be honest my objection to injecting models into mediators is
based on a combination of things:
1) Because mediators are created and destroyed every time the
view they mediate is added / removed, it is sensible to make our
mediators as lightweight as possible.
2) If you inject a model into a mediator then it is very
tempting to begin to use the model in more and more complex ways in
order to 'quickly' do things.
3) If you wind up with logic in your mediator layer then your
mediators are no longer lightweight.
4) The actual logic that makes your application work is now
dispersed in various mediator handler functions around your
application. You probably have repetition.
5) Usually handlers are not well named - we have
"submitClickedHandler" and so on, and this means that somebody
reading the code (a colleague, or even yourself a few weeks later)
actually has to read the detail of what the code does to know what
is being done. (Compare this with proper method naming such as,
say, updateCartTotal(), which tells you what happens without
reading the body.)
6) Once you have logic in your mediators it's also very tempting
to add state to your mediators. If you do add state, and then you
pass this state to the model, it's possible to end up interfering
with garbage collection if you forget to clean up after yourself in
onRemove() . If you do everything via the eventMap then there are
no garbage collection problems.
7) If you also need to listen for update events on the model,
but you inject it as well, you end up doing the same 'work'
(updating view to match model state) in 2 different ways.
8) Making changes is more difficult because your application
logic and your view behaviour are now mixed in one class, and if
you decide that the view should arrive later, or your model should
arrive later, you run into timing problems.
9) Your mediator now has 2 responsibilities, and if either your
model API or your view API changes you have to change your mediator.
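The eventMap-only alternative implied by the points above keeps the mediator to pure translation between view events and application events. A minimal sketch, assuming Robotlegs 1 conventions and hypothetical view/event names:

```actionscript
public class CartViewMediator extends Mediator
{
    [Inject]
    public var view:CartView;

    override public function onRegister():void
    {
        // view -> app: translate the view's gesture into an application event
        eventMap.mapListener(view, CartView.CHECKOUT_REQUESTED, onCheckoutRequested);

        // app -> view: update the view when the model announces a change
        eventMap.mapListener(eventDispatcher, CartModelEvent.CART_UPDATED, onCartUpdated);
    }

    private function onCheckoutRequested(event:Event):void
    {
        dispatch(new CheckoutEvent(CheckoutEvent.START));
    }

    private function onCartUpdated(event:CartModelEvent):void
    {
        // the payload travels on the event, not via an injected model
        view.showItems(event.items);
    }
}
```

Because all listeners go through the eventMap, they are unmapped automatically when the mediator is removed, which is why this style avoids the garbage-collection pitfalls of point 6.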
To be honest it's mostly based on experience. Shaun and Joel
would both see injecting the model into the mediator as acceptable,
provided you then restrain yourself in how you work with it. In my
experience people aren't so good at that kind of self-restraint :)
When I first started using robotlegs I thought that it was ok to
inject a model into a mediator - but I eventually changed my mind.
I've got a very large robotlegs project myself, and I've helped
out others with large robotlegs projects where they've got a
problem - and the source of hard-to-fix problems is usually logic
that has crept into the mediators.
On a small project you can probably do anything you like and get
out before the technical debt catches up with you - but in a
multi-month or even multi-year project you have a responsibility to
the client to be able to keep making changes in an efficient way.
So - on to the suggestion you made...
As you say - and Krasimir further illustrates - the problem with
having the model call the service is that then the model is coupled
to the service - and maybe the model is also doing two jobs
(because it is now concerned with running updates as well).
(The view in this diagram represents the whole view layer - so
mediators + actual views).
I almost always use a factory to process data coming in from a
service, and the factory will then update the model - the service
itself never touches the model.
The service only knows about the factory interface API, the
factory only knows how to build or update the model, and the model
knows nothing about anybody else.
Which is quite nicely decoupled IMO. The service has no
dependency on the model - only on 'something' that is going to
process its result.
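That arrangement might be sketched as the service depending only on a processor interface (the names here are assumptions, not from the thread):

```actionscript
public interface IDataProcessor
{
    function processXML(data:XML):void;
}

public class CompanyDataService extends Actor
{
    // the service knows only this interface, never the model
    [Inject]
    public var processor:IDataProcessor;

    private function onLoadComplete(event:Event):void
    {
        var loader:URLLoader = URLLoader(event.target);

        // hand the raw result off; the factory decides how the model is updated
        processor.processXML(new XML(loader.data));
    }
}
```

Swapping in a different factory (or a test double) requires no change to the service at all.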
I could even further decouple this, I guess, by using a promise
between the service and the factory... that's an interesting idea.
I'd disagree with what Krasimir says about the model listening
for events - in my world models and services only dispatch events,
they don't listen. And only mediators and commands respond to
events, though they dispatch them too.
I'm talking about application events, obviously - the service may
well listen to the events of a Loader, but it doesn't listen for
application events.
But really the idea is the same - you minimise coupling by
having events glue everything together instead of API. The only
classes that use model and service API are Commands and
Factories/helpers, and the view API is used only by its mediator.
Yep, Stray is right. It is better to use a command to populate the
model with the received data, instead of listening for the event
coming from the service. It will be better for tracing the
processes, because we know exactly where the data comes from.
@Stray, what do you think - where is it better to place the parsing
of the data? I mean, if we have an XML that is loaded by a service
and has to be parsed:
1. Send the XML to the model, which parses it and converts it to VOs
2. Parse the XML in the command (that is fired when the data
arrives), convert it to VOs and send these VOs to the model
Posted by Stray (Support Staff) on 18 Feb, 2011 11:09 AM
I do neither - my services have a factory injected against an
interface. The factory usually has a single public method like
processXML - or, if I am using the same factory for
several services because there is a lot in common, it might have
processUserXML and processContentXML.
Usually the factory makes use of some helpers specific to each
data type like UserXMLToModelProcessor and so on. The builder/helpers
only have two functions: buildFromXML (or buildFromJSON or
whatever), and a getter for an errorMessage. If the data comes back
null then the factory gets the errorMessage and dispatches a custom
error event. The application (through Commands) then decides
whether it needs to worry about this error or not - sometimes it's
important, sometimes it's not critical.
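A builder with the two-member contract described above (buildFromXML plus an errorMessage getter) might be sketched like this, with hypothetical names:

```actionscript
public class CompanyDataModelXMLBuilder
{
    private var _errorMessage:String;

    // returns null on failure; the factory then reads errorMessage
    public function buildFromXML(itemXML:XML):CompanyDataModel
    {
        if (itemXML.@id.length() == 0)
        {
            _errorMessage = "Company item is missing an id attribute";
            return null;
        }
        return new CompanyDataModel(uint(itemXML.@id), String(itemXML.name));
    }

    public function get errorMessage():String
    {
        return _errorMessage;
    }
}
```

Keeping the builders this small means each one can be unit tested against a few XML fragments without any framework involvement.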
So the factory takes the XML it was passed by the service (or
sometimes URLVars) and it splits it into the pieces to be processed
and uses the various data-type-specific builders to turn it into
the data objects to use to populate the model.
So - the service only knows how to load the data. The factory
only knows how to process the data. If the factory needs to create
more than one kind of object then each creation is done with a
helper class that only knows how to build that particular data type.
This probably best illustrates the relationship between the
Factory and the actual builders:
public function processCompanyData(dataXML:XML):void
{
    var dataModelBuilder:CompanyDataModelXMLBuilder = new CompanyDataModelXMLBuilder();
    var dataModelVector:Vector.<CompanyDataModel> = new Vector.<CompanyDataModel>();
    for each (var itemXML:XML in dataXML.d)
    {
        var dataModel:CompanyDataModel = dataModelBuilder.buildFromXML(itemXML);
        if (dataModel == null)
            continue; // on error: dispatch a custom error event with dataModelBuilder.errorMessage
        dataModelVector.push(dataModel);
    }
    var dataSetModel:CompanyDataSetModel = new CompanyDataSetModel(dataModelVector);
    completeInjections(dataSetModel, CompanyDataSetModel);
}

protected function completeInjections(instance:*, instanceClass:Class):void
You can see here that I'm using the injector to wire things up,
and that I've abstracted this part to another function - that's the
beauty of injecting against the factory interface in the service.
If I wanted to move this outside of robotlegs I would only have to
override the function that does the injectInto / mapValue part. All
the logic of creation of the company data etc. is sealed off from
the framework.
The service sends updates during loading. The factory will send
error events if the data is broken. The model sends update events
after it has been updated. The helper classes that build each data
type aren't framework-connected - they are short-lived objects that
just get used to turn a piece of XML into an object and are then
thrown away.
The advantage of this approach is that you can use the same
process each time. I have an app that allows people to edit and
administrate many different kinds of data on their system. The
basis for the loading, saving and deleting services for each
different type of data is entirely abstract - and then there are
only tiny variations in terms of which actual factory, and what
URLVars are sent etc.
All of that variation is done through a look-up based on the
type of data that is requested / saved / deleted. Obviously that
type is a Class and not a magic string :)
It's just abstract factory / template method patterns in
combination I think. I like it a lot because once it's working it
just keeps working - the process is tested, and you only have to
add and test each of the little variations.
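The Class-keyed lookup (rather than a magic string) might be sketched with a Dictionary; the factory and model class names here are assumptions:

```actionscript
import flash.utils.Dictionary;

// Map each data type (a Class, not a magic string) to the factory
// that knows how to process it
private var factoryByType:Dictionary = new Dictionary();

factoryByType[CompanyDataModel] = new CompanyDataFactory();
factoryByType[UserDataModel]    = new UserDataFactory();

public function processorFor(dataType:Class):IDataProcessor
{
    return IDataProcessor(factoryByType[dataType]);
}
```

Using the Class as the key means a typo fails at compile time instead of silently returning nothing at runtime, which is presumably why a string key is being avoided.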
The only part that I'm not happy with is the use of the check
against 'null' to decide whether the builder hit an error. I'm sure
there is a better way to do this, but I haven't worked out what
that is yet! Any input there gratefully received...
I guess I could pass a promise to the builder and use handlers
like onResult, onFault etc., but as this part of the process is
currently synchronous I'm not 100% sure how that could work in a
loop.
Sorry... that's a very long answer to a short question!
Sounds like a good approach to let the factory decide, based on the
VO, which query strings need to be sent along. This way you
could keep the service the same and just swap out the factory when
one small change is needed by a single project.
For example, instead of only sending firstname/lastname you might
also need to send a unique identifier and an email address. This
should save a lot of overrides. This URLVariables factory only
needs to be aware of the model and the VO data, of course.
In a few of our game projects at work we just need to slightly
change which data gets sent to the game server based on the score
submission form - e.g. address or not, or a uid for use with social
networks.
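A URLVariables factory of the kind described, where the query string is derived from the VO so the service itself never changes, could be sketched like this (all names are illustrative):

```actionscript
import flash.net.URLVariables;

public class ScoreURLVariablesFactory
{
    // a project needing extra fields (uid, email, ...) swaps in a
    // subclass or a different factory, leaving the service untouched
    public function create(vo:ScoreVO):URLVariables
    {
        var vars:URLVariables = new URLVariables();
        vars.firstname = vo.firstName;
        vars.lastname  = vo.lastName;
        vars.score     = vo.score;
        return vars;
    }
}
```

The service would simply attach whatever URLVariables the injected factory returns to its URLRequest, keeping the per-project variation in one small class.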