Why I Will Not Use AutoMapper

   AutoMapper is a library used to copy the properties of one object to another. Developers typically use it to copy data from their entities to their view models or DTOs. I once evaluated AutoMapper for a project and decided that I did not want to use it. Now I am working on a project where I am not the one choosing the tools, so I get to use it after all. I have now confirmed that AutoMapper is the wrong choice for most projects, but interestingly, the real-world issues with it were different from what I expected.

The Use Case

   First let's look at the need for AutoMapper. In most projects there is a need to separate your business objects (entities) from the objects you send over the wire (DTOs) or the objects you use to feed your MVC views (view models). Consider a Product entity that has a Company property, where Company has a Name property of type string. When sending this object over the wire you may want to omit some of its properties for security or performance reasons. In addition, the code that consumes your service or your view may not care about all the properties of a Company when requesting a Product, and it may be better to send a Product with a flattened CompanyName property instead of a composite object. In these cases we introduce special classes for the purpose of transferring the data. A ProductDto can easily have a CompanyName property and omit properties as needed. When we architect things this way (and we should) we end up with a lot of code looking like this:

productDto.Name = productEntity.Name;
productDto.Price = productEntity.Price;
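For illustration, the classes implied by the description above might look like this. This is a minimal sketch; the exact class shapes (including the TaxNumber property) are my assumption, not taken from the article:

```csharp
// Hypothetical entity and DTO shapes matching the Product/Company example.
public class Company
{
    public string Name { get; set; }
    public string TaxNumber { get; set; } // something we may want to omit from the DTO
}

public class Product
{
    public string Name { get; set; }
    public decimal Price { get; set; }
    public Company Company { get; set; }
}

public class ProductDto
{
    public string Name { get; set; }
    public decimal Price { get; set; }
    public string CompanyName { get; set; } // flattened from Company.Name
}
```

With these shapes the flattening is a single extra assignment: `productDto.CompanyName = productEntity.Company.Name;`.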

Enter AutoMapper

   AutoMapper allows you to replace all this code with a single line of configuration code and a single line of mapping code that looks like this:

   Mapper.Initialize(cfg => cfg.CreateMap<Product, ProductDto>());

   //usage assuming mapper is injected via DI
   ProductDto dto = mapper.Map<ProductDto>(productEntity);

You will probably need to do the reverse mapping when accepting a DTO for update and create operations. If you have a lot of properties this saves a lot of lines of code.
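A sketch of what the reverse configuration might look like, using the same static API as the snippet above and AutoMapper's ReverseMap call. This is a fragment, not a complete program; it assumes a productEntity in scope and the Product/ProductDto types from the example:

```csharp
using AutoMapper;

// One CreateMap plus ReverseMap covers both directions.
Mapper.Initialize(cfg => cfg.CreateMap<Product, ProductDto>().ReverseMap());

// Entity -> DTO for reads:
ProductDto dto = Mapper.Map<ProductDto>(productEntity);

// DTO -> existing entity for updates (the reverse direction):
Mapper.Map(dto, productEntity);
```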

Where I Thought the Problem Would Be

   When I evaluated AutoMapper for the projects I was architecting I was not happy with the fact that if a property was removed from the entity, the compiler would not catch the error. Imagine that you remove the Price property from the product - manual mapping will result in a compile-time error, while AutoMapper will only complain at runtime (at least it complains). The proposed solution to this is unit tests. I will take a compile-time error over unit tests that I have to write and maintain any day. However, I have to admit that in practice this has not been much of a problem. On my current project I never encountered this situation, although it remains possible.
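The unit tests proposed for this are usually built on AutoMapper's configuration validation. A hedged sketch, again using the static API from the example above:

```csharp
using AutoMapper;

// A test along these lines fails at test time instead of in production:
// if ProductDto has a member (e.g. Price) that no longer matches anything
// on Product, AssertConfigurationIsValid throws AutoMapperConfigurationException.
Mapper.Initialize(cfg => cfg.CreateMap<Product, ProductDto>());
Mapper.AssertConfigurationIsValid();
```

Note that this still runs as a test, not as a compile step, which is exactly the trade-off discussed above.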

What the Problem Was

   The actual problem I encountered was much more serious. It turns out AutoMapper solves a problem we do not have while bringing real problems of its own. If I think for a moment about where I waste most of my time when developing, I come to a simple conclusion: it is libraries that waste most of my time. Most of my time is spent either trying to bend a library to my will or chasing a bug because a library I am using is not behaving as I expect. Every library has this kind of cost, and if I choose to use a library, that cost had better be lower than the payoff. The cost of using AutoMapper is admittedly low. Configuring it is not that hard and it can do almost anything I can think of in terms of mapping. However, the cost is definitely non-zero. I have been in multiple situations where a convention-based mapping did not work as expected. I have encountered a lot of entities with fewer than five properties whose non-trivial mappings required configuration via lambdas or ignores, which resulted in more code than plain manual mapping. I have even seen a bug where we spent a lot of time trying to fix the mapping because that is where the problem was manifesting (a field was null); in the end it was not AutoMapper's fault (we had forgotten to compile part of the project), but we wasted time because the added complexity prevented us from identifying the issue quickly.
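To illustrate the point about lambdas and ignores: for a small DTO where a couple of members need explicit handling anyway, the configuration can outgrow the mapping it replaces. A hedged sketch using standard ForMember/MapFrom/Ignore configuration calls (the member names follow the Product example; the Ignore on Price is hypothetical):

```csharp
using AutoMapper;

// Configuration for a three-property DTO where two members
// need explicit handling anyway:
Mapper.Initialize(cfg => cfg.CreateMap<Product, ProductDto>()
    .ForMember(d => d.CompanyName, opt => opt.MapFrom(s => s.Company.Name))
    .ForMember(d => d.Price, opt => opt.Ignore()));

// versus the manual equivalent, which is comparable in size
// and trivially debuggable:
// dto.Name = entity.Name;
// dto.CompanyName = entity.Company.Name;
```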

   So, as it turns out, AutoMapper introduces complexity that you have to manage, maintain and debug. If we compare this to the manual approach we can easily conclude that AutoMapper is not worth it. It is practically impossible to encounter a hard-to-debug situation with manual mapping. Manual mapping is just assigning properties. This is C# 101; everyone understands it, and it is extremely easy to trace and debug. Of course, manual mapping requires a lot of typing, but when was the last time typing was the bottleneck of your development speed? In the 20 minutes I spent debugging a single AutoMapper issue (or what I thought was an AutoMapper issue) I could have written 400 trivial property assignments (at a speed of one assignment per three seconds), and that disregards the configuration of AutoMapper, which also takes time. Assigning properties is simply not a problem. It requires a lot of lines of code, but the code is so trivial that it is not a problem to write and not a problem to debug. This approach has almost zero cost. As low as AutoMapper's cost is, it is significantly higher than that of simply writing manual mapping code.

What I Do Instead

   Here is some DTO code from a real-world project.

   public class ProductDto : BaseDto<Product>
   {
       public ProductDto()
       {
       }

       public ProductDto(Product product)
       {
           ProductID = product.ProductID;
           //more properties
       }

       public int ProductID { get; set; }
       public string Name { get; set; }
       public decimal Price { get; set; }
       //more properties

       public override void UpdateEntity(Product productEntity)
       {
           productEntity.Name = Name;
           productEntity.Price = Price;
           //more properties
       }

       public override void CopyEntityData(Product productEntity)
       {
           Name = productEntity.Name;
           Price = productEntity.Price;
           //more properties
       }
   }

The BaseDto<T> class is needed only when you want to use the mapping in generic logic that you reuse for all DTOs. I have added it here for completeness because this is how I tend to use it in real-world code. Here is what it looks like:

   public abstract class BaseDto<T>
   {
       public abstract void UpdateEntity(T entity);
       public abstract void CopyEntityData(T entity);
   }

The constructor of the DTO is used when we get an entity from the database and want to create a DTO to return to the client. The UpdateEntity method handles the reverse mapping in update operations. CopyEntityData is used for create operations performed in generic code, because generic code cannot call non-default constructors. This code is really simple, becomes even simpler if the generic part is dropped, and when copying properties we have clean, plain C# that lets us flatten properties like Company.Name in any way we see fit.
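A sketch of the kind of generic logic the base class enables. The DtoFactory helper and its name are hypothetical, and the classes from above are re-declared in minimal form so the sketch compiles standalone:

```csharp
// Minimal re-declaration of the classes above so this sketch is standalone.
public abstract class BaseDto<T>
{
    public abstract void UpdateEntity(T entity);
    public abstract void CopyEntityData(T entity);
}

public class Product
{
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public class ProductDto : BaseDto<Product>
{
    public string Name { get; set; }
    public decimal Price { get; set; }
    public override void UpdateEntity(Product e) { e.Name = Name; e.Price = Price; }
    public override void CopyEntityData(Product e) { Name = e.Name; Price = e.Price; }
}

public static class DtoFactory
{
    // Generic code cannot call ProductDto(Product), so it uses the
    // parameterless constructor plus CopyEntityData instead.
    public static TDto ToDto<TDto, TEntity>(TEntity entity)
        where TDto : BaseDto<TEntity>, new()
    {
        var dto = new TDto();
        dto.CopyEntityData(entity);
        return dto;
    }
}
```

The `new()` constraint is what makes the parameterless constructor callable from generic code, which is why CopyEntityData exists alongside the mapping constructor.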


   I guess there is some sweet spot where using AutoMapper is worth it. If your median entity has more than 15 properties, few of which need non-trivial mapping, maybe you will save more time using AutoMapper than you waste debugging and configuring it. However, in my experience even complex projects tend to have a lot of entities with a small number of properties, and the mappings are often not trivial. I believe in these cases AutoMapper does not pay off. It solves a problem that I simply do not have and brings the most time-consuming kind of problem: fighting libraries.
Tags:   english programming 
Posted by:   Stilgar
02:16 01.10.2016



Posted by   Guest (Unregistered)   on   03:14 01.10.2016

Class explosions are also not free. Adding an enormous amount of overhead with tons of mapping classes comes with its own cost. As does the fact that AutoMapper does things like projections and flattening rather cheaply. The real value comes in when you are mapping objects not just 1:1 but N:N. I also suggest that claiming "it doesn't solve a problem" is a bit arrogant.

Posted by   Guest (Unregistered)   on   04:07 01.10.2016

The author's solution is an anti-pattern. AutoMapper can validate your configurations for you, e.g. AssertConfigurationIsValid().

Posted by   Stilgar   on   10:27 01.10.2016

I assume my solution is an anti-pattern because it doesn't have separate mapping classes. I honestly don't see the need to break the dependency on entities in the DTOs but even then we can just introduce mapping classes to do the same thing. As for the explosion of mapping classes I see it in my current project too as these configurations need to sit somewhere.

Posted by   Random Javar (Unregistered)   on   19:51 03.10.2016

Disclaimer: I am not a .NET developer but I know what AutoMapper is.

In my world we use similar mappers, maybe not using reflection, but that is not the important bit here.
I like mappers, and the reason is that sometimes the examples are not that easy.

Of course for 5 fields you can use a constructor or explicit mapping; in Stilgar's example he can just explicitly map the fields.
Not sure how this is done in AutoMapper, but in Orika it is as simple as field("field1","field1"). This is not very bright, but if the property is gone you will see it.

I like mappers because of examples like this:
Imagine you have a Product and this Product has a Category (categories). On the entity level I may have a reference from Category to itself, and usually on the view you don't want to populate all categories even if your CategoryDTO contains a CategoryDTO. I also don't want to have 5 mappers with a deep hierarchy of 4 or 6 or 3 depending on the design. With mappers I can have only one configuration. Also, let's say you have many views for Product: a basic view shown on a search list, a grid view with maybe other properties, a details view, maybe a cart view when the product is added to the cart, an order view when you finalize the order, and an invoice view, and each of these shows 5 or 9 or 11 properties. In my project, on top of that, we have a promotions view, a suggestions view and an autocomplete view.
So what we did is, guess what, we work with interfaces. We have configuration saying how to convert from interface to interface, so passing an entity or DTO implementing these or other interfaces will convert most of them based on the rules. It is even crazier because we cache this conversion, since it turns out to be the slowest bit, not because of Orika or the mapper, but because of lazy loading. Now imagine you have the conversion from your productEntity to a proxy that implements ICartProductDTO, right? Next time, when I need an IProductGridDTO that extends ICartProductDTO, for that bit of logic we don't need the entity, we may not need the entity at all, because... you get the idea. The best part is that you can do this even without the person knowing; he just says "I need a ProductDTO of this type", we get all the interfaces, populate it, and for the rest use the mapper, which doesn't use reflection but in-memory code generation, since that is faster.. :)

Mappers do not help when you have 10 or 30 entities... in that case we may as well use the SessionInView anti-pattern. But for a real project with 800 entities and 1300 tables, doing everything manually is insane; we would need 5 developers for a month only to register and implement mappers.

P.S. Sorry if I have typos, but I am not on Stilgar's machine and scrolling comments on a mobile phone is the worst thing ever... having a hardcoded width for the textarea is not an anti-pattern, it is user hostile.

