Why implement another caching system?
Caches are mostly used to speed up data loading from other sources, or to show data without recalculating everything again and again. You trade memory (RAM or disk) and data freshness for saved resources (mostly server load) and better responsiveness. Most existing caching systems take the output and/or the return value of a function and save it if it is not yet in the cache or has timed out; otherwise they return the content of the cache. That is simple, and mostly all they do. But what happens if the function that fills the cache requests data from a system which ...
- ... isn't really fast?
- ... is remote and has connection trouble?
- ... is a big database query?
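The basic pattern that most existing caching systems implement can be sketched as follows. This is an illustrative Python sketch, not the ForwardFW implementation; the class and method names are made up for this example.

```python
import time

class SimpleCache:
    """Minimal timeout-based cache: return the stored value while it is
    still fresh, otherwise call the loader function and store its result."""

    def __init__(self, timeout_seconds):
        self.timeout = timeout_seconds
        self.store = {}  # key -> (timestamp, value)

    def get(self, key, loader):
        entry = self.store.get(key)
        if entry is not None:
            saved_at, value = entry
            if time.time() - saved_at < self.timeout:
                return value  # cache hit, still fresh
        # Miss or timed out: regenerate and save.
        value = loader()
        self.store[key] = (time.time(), value)
        return value
```

Note that this naive version has exactly the weaknesses listed above: if the loader is slow or the backend is down, every request after the timeout runs straight into the problem.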
Another problem is the timeout of an unreachable data server (remote SQL server, SOAP web services, ...): your cache will be empty, and every user request will run into this timeout. Why not return older data from the cache (if available), and why not save the error state instead of running into the same error again and again? Perhaps also send an e-mail with the error message to the administrator each time.
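The stale-on-error idea could look like this. Again a hedged Python sketch with invented names, assuming the loader raises an exception on failure:

```python
import time

class StaleOnErrorCache:
    """Cache that falls back to stale data when the loader fails,
    instead of leaving every user request waiting on a dead backend."""

    def __init__(self, timeout_seconds):
        self.timeout = timeout_seconds
        self.store = {}  # key -> (timestamp, value)

    def get(self, key, loader, notify=None):
        entry = self.store.get(key)
        if entry is not None and time.time() - entry[0] < self.timeout:
            return entry[1]  # fresh hit
        try:
            value = loader()
        except Exception as exc:
            if notify is not None:
                notify(exc)  # e.g. mail the error to the administrator
            if entry is not None:
                return entry[1]  # serve stale data rather than nothing
            raise  # no old data to fall back to
        self.store[key] = (time.time(), value)
        return value
```

The `notify` hook stands in for whatever error reporting (such as the e-mail mentioned above) the real system would use.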
So this caching system will have the following features:
- Locking of the cache while in "generating" mode (a second request will poll for the data). [Implemented in 0.0.9]
- Return old data from the cache if an error occurred. [Implemented in 0.0.9]
- Clear cache data. [Implemented 0.0.10]
- Save the error state of the last data request so the cache doesn't keep trying to fetch from the failing source. [Not yet implemented, planned for 0.0.12]
- Fine-grained system to clear cache data. [Not yet implemented, planned for 0.1.1]
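The first feature, the "generating" lock with polling, can be sketched like this. This is an assumed Python illustration of the idea, not ForwardFW code; it uses an in-process lock where the real system would need a shared marker in its cache backend:

```python
import threading
import time

class GeneratingCache:
    """Cache that marks an entry as 'generating' while one request
    rebuilds it; concurrent requests poll until the data appears
    instead of all running the expensive loader at once."""

    GENERATING = object()  # sentinel marking an in-flight rebuild

    def __init__(self, poll_interval=0.05):
        self.poll_interval = poll_interval
        self.store = {}
        self.lock = threading.Lock()

    def get(self, key, loader):
        while True:
            with self.lock:
                value = self.store.get(key)
                if value is self.GENERATING:
                    pass  # another request is building it: poll below
                elif value is not None:
                    return value
                else:
                    self.store[key] = self.GENERATING  # claim the rebuild
                    break
            time.sleep(self.poll_interval)
        try:
            value = loader()
            with self.lock:
                self.store[key] = value
            return value
        except Exception:
            with self.lock:
                del self.store[key]  # release the claim so the next request retries
            raise
```

Only the first request pays for the expensive load; the others wait briefly and then read the finished result from the cache.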
The DataLoader of ForwardFW will be the first "user" of the cache, since it can handle data gathering from remote databases or SOAP web services.