
I have an iOS app that pulls user data down from a web service and saves it to a SQLite-backed Core Data store. While this is going on, I take the URL for each user's avatar and save the image locally into Core Data as each download completes.

Each of these tasks is performed on its own thread with its own managed object context, but with the same shared persistent store coordinator.

When there are just a few items being pulled down this works great, but as we approach 100 items or so I get frequent deadlocks on the initial load of data. When I pause the debugger, I usually see both threads waiting on an executeFetchRequest: call.

I turned on SQL debugging in my scheme (the -com.apple.CoreData.SQLDebug launch argument), and according to the console output the fetches complete, but the threads never continue.

What else can I use or look at to examine why these are deadlocking or how I can prevent these deadlocks from occurring?


2 Answers


To answer your exact question, "What else can I use or look at to examine why these are deadlocking or how I can prevent these deadlocks from occurring?", I can offer only a couple of thoughts, and they are very general because the problem you are approaching is hard:

  • Add NSLog statements everywhere you believe locks may be occurring. I find that NSLog(@"%s ...YOUR DEBUG STATEMENT", __PRETTY_FUNCTION__) helps you identify exactly which function is being called, and combining it with the current queue's label tells you which queue it is executing on.
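A minimal sketch of that logging idea. The DebugLog macro name is my own invention, and I am assuming a modern SDK where dispatch_queue_get_label(DISPATCH_CURRENT_QUEUE_LABEL) is available for reading the current queue's label:

```objc
#import <Foundation/Foundation.h>

// Hypothetical macro: prefixes every log line with the function name and the
// label of the dispatch queue the log statement is executing on.
#define DebugLog(fmt, ...) \
    NSLog(@"%s [queue: %s] " fmt, __PRETTY_FUNCTION__, \
          dispatch_queue_get_label(DISPATCH_CURRENT_QUEUE_LABEL), ##__VA_ARGS__)

// Example use inside one of your download methods:
// DebugLog(@"fetching avatar for user %@", userID);
// Output shows which function ran, and on which queue, so you can reconstruct
// the sequence of events leading up to the hang.
```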

If we move beyond the question of what tools are available (which is basically nothing useful), then I am left with some suggestions about how to debug the system you describe:

As a bit of background: I have built smooth-scrolling UITableViews with 2000+ dynamically downloaded, variable-height cells with images, with absolutely no jitter or delays due to data processing or drawing. That system was originally designed with Core Data, but eventually we moved to straight SQLite to get around exactly the problems you are encountering with multithreading and parallelism. I am not advocating the switch to SQLite; it was a decision we made internally to improve speed and reduce inconsistency in the local database. I state this as context for my answer.

I would start by looking hard at your use of Grand Central Dispatch. If you use any dispatch_sync calls, make sure they cannot occur in chains that block the calling thread. I originally used dispatch_sync to ensure that multiple threads did not access one of the managed object contexts at the same time, and only discovered the problem after a couple of hours of debugging. These can sneak up on you, because the dispatch_sync calls may be buried deep in functions called by other functions.
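A contrived sketch of the pattern I mean (the queue label is made up). A dispatch_sync onto a serial queue, issued from code already running on that same queue, never returns, and in real code the inner call is usually hidden several function calls deep:

```objc
#import <dispatch/dispatch.h>

dispatch_queue_t mocQueue =
    dispatch_queue_create("com.example.moc", DISPATCH_QUEUE_SERIAL);

dispatch_sync(mocQueue, ^{
    // ...some work that, several calls deep, ends up doing this:
    dispatch_sync(mocQueue, ^{
        // Deadlock: the queue cannot run this block until the outer
        // block finishes, and the outer block is waiting on this one.
    });
});
```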

I did end up using a transactional system (very SQL-y) with dynamically created queues for individual queries/updates, which ensured that too many operations would not occur at once. I also had a completely separate serial Read queue for quick reads from the DB; this queue's MOC would be mirrored from the other MOCs when possible. This was heavyweight and comparatively slow, but because the processing happened largely on background threads, the cost was isolated from the user.
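A rough sketch of the serial-read-queue idea, not my actual code; the class and queue names are illustrative, and I am assuming the read MOC is configured elsewhere. The key point is that reads are dispatched asynchronously, so callers can never deadlock on the queue, and the MOC is only ever touched from one queue:

```objc
#import <CoreData/CoreData.h>

@interface DBReader : NSObject
@property (nonatomic, strong) NSManagedObjectContext *readMOC; // set up elsewhere
- (void)fetchWithRequest:(NSFetchRequest *)request
              completion:(void (^)(NSArray *results, NSError *error))completion;
@end

@implementation DBReader {
    dispatch_queue_t _readQueue; // serializes all access to readMOC
}

- (instancetype)init {
    if ((self = [super init])) {
        _readQueue = dispatch_queue_create("com.example.db.read",
                                           DISPATCH_QUEUE_SERIAL);
    }
    return self;
}

- (void)fetchWithRequest:(NSFetchRequest *)request
              completion:(void (^)(NSArray *, NSError *))completion {
    // async, never sync: a caller already on _readQueue cannot deadlock here
    dispatch_async(_readQueue, ^{
        NSError *error = nil;
        NSArray *results = [self.readMOC executeFetchRequest:request
                                                       error:&error];
        completion(results, error);
    });
}
@end
```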

Core Data is generally very difficult to multithread. SQLite is a little simpler, though you have to build a lot of application-specific architecture around it to make it usable.

If you would like to post additional detail about your system, I may be able to help more specifically. I hope this has been helpful.

answered 2012-10-12T04:09:23.743

Make sure that you batch changes to the DB together. There is no need to call saveContext after each change; it is probably sufficient to save the temporary background context after every dozen or so changes.
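A minimal sketch of that batching, assuming a hypothetical batch size of 12 and an insertOrUpdateItem:inContext: helper of your own; the names are illustrative, not from your app:

```objc
#import <CoreData/CoreData.h>

static const NSUInteger kSaveBatchSize = 12; // assumed batch size

NSUInteger pending = 0;
for (NSDictionary *item in downloadedItems) {
    [self insertOrUpdateItem:item inContext:backgroundContext]; // hypothetical helper
    if (++pending % kSaveBatchSize == 0) {
        NSError *error = nil;
        if (![backgroundContext save:&error]) {
            NSLog(@"batch save failed: %@", error);
        }
    }
}
// flush whatever is left over from the last partial batch
if (pending % kSaveBatchSize != 0) {
    NSError *error = nil;
    if (![backgroundContext save:&error]) {
        NSLog(@"final save failed: %@", error);
    }
}
```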

Please see this article of mine for some pointers: http://www.cocoanetics.com/2012/07/multi-context-coredata/

answered 2012-10-12T11:38:32.537