
Enterprise Application

Performance
Distributed Caching

Tito Moreira
Solution Architect - Experts Team
Performance Hurdles

• Application code
○ Slow Queries / too many accesses to database
○ Slow Extensions
○ Large ViewState / Session

• Infrastructure
○ Database
○ Network
Caching helps, right?
There are only two hard things in Computer Science:
cache invalidation and naming things.

-- Phil Karlton
Out-Of-the-Box Caching in OutSystems

• Queries
• Actions
• WebBlocks
• Screens
Out-Of-the-Box Caching in OutSystems

• Queries
• Actions
• WebBlocks
• Screens

All cached in an in-memory process (local server cache)
Considerations when using local server caching

• It shares resources (memory) with other apps in the same OutSystems Front End
• Data in cache is not consistent across different servers
• Not suited to store hundreds of Megabytes of data
• It's entirely managed by the OutSystems platform
○ Developers cannot control the cache entry keys
○ It is not possible to store local variables, e.g. lists of Structures
○ Cache invalidation mechanisms are somewhat limited
• Does not scale well with the number of Servers
○ The first hit in each local Server cache is always a "miss"; however, this
can be mitigated using Warm-up procedures.
Data consistency using local server caching
What is Distributed Caching?
Distributed Cache concepts

• Stores the cache on dedicated infrastructure resources


○ Distributed cache has different scalability needs
• Keeps the infrastructure server caches synchronized
○ Every server in the distributed cache infrastructure should have the same data for
a cache entry.
• Makes the cached data remotely available to all Front-Ends in a
transparent way
○ Front-Ends don’t have any knowledge about the distributed cache infrastructure
• It is complementary to the OutSystems built-in cache
○ Distributed cache does not replace the local cache; it is used in addition to it in
order to overcome certain limitations inherent to the local cache approach (e.g.
cache data coherence).
General Distributed Cache Infrastructure
[Diagram: users reach the Front-End servers through a User Load Balancer (haProxy or other) via HTTP requests; over the internal network, the Front Ends communicate with the Distributed Cache servers using the cache protocol (over TCP/IP).]
Patterns to Populate a Distributed Cache

• On Demand / Cache Aside / Read-Through


○ The application tries to retrieve data from the cache; when there's a "miss",
the application is responsible for storing the data in the cache so it will be
available next time (see the sketch after the diagram below).
○ To implement Write-Through, the cache should be updated whenever the
records are.
[Diagram: 1. MyApp (UI) consumes data-related Actions from DataServices_CS, which encapsulates Entities and provides Read or Write user Actions. 2. DataServices_CS tries to read from the Distributed Cache infrastructure through the CacheConnector. 3. On a cache miss, data is read from the DB. 4. Data is updated in the cache, to be available on the next access.]
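A minimal cache-aside sketch of steps 1-4 above, assuming a Redis-based distributed cache; the host name, key layout and load_product_from_db helper are illustrative stand-ins, not part of the Platform or dmCache APIs:

    # Cache-aside / read-through, following steps 1-4 in the diagram above.
    # "cache.internal", the key layout and load_product_from_db() are assumptions.
    import json
    import redis

    cache = redis.Redis(host="cache.internal", port=6379)
    TTL_SECONDS = 300  # let entries expire so stale data is eventually dropped

    def load_product_from_db(product_id):
        # hypothetical stand-in for the DataServices_CS read Action
        return {"id": product_id, "name": "example"}

    def get_product(product_id):
        key = f"product:{product_id}"
        cached = cache.get(key)                           # try to read from cache
        if cached is not None:
            return json.loads(cached)                     # cache hit
        data = load_product_from_db(product_id)           # cache miss: read from DB
        cache.set(key, json.dumps(data), ex=TTL_SECONDS)  # store for next access
        return data

    def update_product(product_id, data):
        # Write-Through: persist the record, then refresh the cache entry
        # save_product_to_db(product_id, data)            # hypothetical DB write
        cache.set(f"product:{product_id}", json.dumps(data), ex=TTL_SECONDS)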
Patterns to Populate a Distributed Cache

• Background Data Push


○ A Timer background Action "pushes" data into the distributed cache on a
regular schedule. Any consumer application pulls the same data from the
cache without being responsible for updating the cache data (see the sketch
after the diagram below).

[Diagram: 1. CacheSync_CS reads from the DB and updates the cache on a regular interval, writing to the Distributed Cache infrastructure through the CacheConnector. 2. MyApp (UI) consumes data-related Actions from DataServices_CS, which encapsulates Entities and provides Read or Write user Actions. 3. Writes should invalidate the cache!]
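A minimal background data push sketch, again assuming a Redis-based cache; the refresh interval, key name and load_all_products helper are illustrative stand-ins for the CacheSync_CS Timer logic:

    # Background data push: a scheduled job keeps the cache warm; consumers only read.
    # Host, key name, interval and load_all_products() are assumptions.
    import json
    import time
    import redis

    cache = redis.Redis(host="cache.internal", port=6379)
    REFRESH_SECONDS = 600

    def load_all_products():
        # hypothetical stand-in for a heavy DB query
        return [{"id": 1, "name": "example"}]

    def push_products_to_cache():
        # read from the DB and overwrite the cache entry in one step
        cache.set("products:all", json.dumps(load_all_products()))

    if __name__ == "__main__":
        # an OutSystems Timer (CacheSync_CS) or a cron job would replace this loop
        while True:
            push_products_to_cache()
            time.sleep(REFRESH_SECONDS)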
Patterns to Populate a Distributed Cache

On-Demand vs. Background Data Push

• High frequency of data change
○ On-Demand: Good (cache can be updated immediately on the Write user Action)
○ Background Data Push: Bad (a background process makes high-frequency cache updates unfeasible)

• Exposed Write operations
○ On-Demand: Good (cache is updated on demand)
○ Background Data Push: Bad (there might be conflicts between Write operations and the background process; locking required)

• Performance on first access
○ On-Demand: Bad (a cache miss requires a DB read and a cache update)
○ Background Data Push: Good (there shouldn't be any cache misses; all data should be cached ahead of time)

• Cache of large blocks of Data
○ On-Demand: Bad (small amounts of data only, since caching is done synchronously on cache misses)
○ Background Data Push: Good (caching of big chunks of data is done asynchronously)
Benefits from using Distributed Caching

So what? How can I benefit from it?


• Access cached data from anywhere
○ Actions, Extensions, external applications, etc.
• Get stats about what is stored (most providers)
• Offload data from the Session
• Store significant amounts of pre-processed data
○ Yes, Gigabytes of Query data!
• Load cache data from background processes
○ It opens an entire spectrum of initialization possibilities
• It’s easier to scale cache Servers than DB servers
When to use Distributed Cache?
When to use Distributed Caching

• Don’t use it if:


○ There are just a few Front End Servers (1 or 2)
○ Your Apps won’t have a significant amount of traffic
○ Your Apps don’t suffer from performance issues
○ You want to replace the OutSystems local cache functionality entirely
■ Distributed Cache is a complementary component, and should be used in
very specific scenarios!
When to use Distributed Caching

• You should consider using it if:


○ You have more than 3 Front End Servers and you might need to
scale even further
○ You have public-facing Web apps that display “static” data.
○ Your data changes often, making local caches invalid
○ You need 100% control over the cache
○ You need to share state between Servers without using Database
or Session
Recommended way to deploy a
Distributed Cache
Distributed Cache deploy recommendations

• Don’t install Distributed Cache services on OutSystems servers


• Use a different infrastructure for the Distributed Cache servers
• If in the OS Cloud, it’s advised to use AWS ElastiCache in the same
VPC
• Plan for the Memory and CPU requirements of the Distributed Cache
servers (a rough sizing sketch follows this list)
○ Requests/sec, Reads/Writes, Size of cached data
• Keep the Distributed Cache servers in the same network as the OS
servers
○ Without firewalls, proxies or similar in between (lower latency)
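A rough back-of-envelope sizing sketch for those requirements; every number below is an assumption for illustration, not a measured value:

    # Back-of-envelope sizing for the cache servers; all figures are illustrative.
    avg_entry_bytes = 4 * 1024      # average size of one cached entry (assumed)
    entries_at_peak = 500_000       # number of entries kept at peak (assumed)
    overhead_factor = 1.5           # per-key metadata and fragmentation (assumed)

    required_gb = entries_at_peak * avg_entry_bytes * overhead_factor / (1024 ** 3)
    print(f"~{required_gb:.1f} GB of cache memory needed")

    reads_per_sec = 2_000           # assumed read load
    writes_per_sec = 200            # assumed write load
    print(f"plan CPU/network for ~{reads_per_sec + writes_per_sec} ops/sec")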
Managing Distributed Cache

• Data remains cached even after a release (different infrastructure)


○ Not managed by LifeTime (LifeTime plugin for cache purge?)
• Cached data should be purged whenever there is a release (see the
key-versioning sketch after this list)
○ Data model might have changed
○ Data from previous release might be incompatible with latest release
○ Cached data requirements might be different for the new release
• Cached data initialization is possible with external processes
• Distributed Caching locking mechanisms depend on implementation
(Redis ≠ Memcached)
• Distributed Caching resources should be monitored independently of
OS Front-Ends
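One way to make purging on release cheap is to namespace every cache key with the deployed version; a minimal sketch assuming a Redis-based cache and a version value maintained by the team (e.g. in a Site Property):

    # Release-aware cache keys: a new release stops seeing the previous release's
    # entries, and old entries can be purged or left to expire. Version source,
    # host and key layout are assumptions.
    import redis

    RELEASE_VERSION = "2024.10.1"   # e.g. read from a Site Property at deploy time
    cache = redis.Redis(host="cache.internal", port=6379)

    def versioned_key(name):
        return f"{RELEASE_VERSION}:{name}"

    def purge_release(version):
        # explicit purge after a deployment; SCAN avoids blocking the server
        for key in cache.scan_iter(match=f"{version}:*"):
            cache.delete(key)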
dmCache
a Distributed Cache Connector
Introducing dmCache

• dmCache is a Forge component that:


○ Provides actions to store/read OS data types and Records
○ Abstracts the developer from the Distributed Cache protocol and implementation
○ Helps the developer generate Cache entry keys (an illustrative key scheme follows this list):
■ Global (viewable by all applications)
■ Application
■ Session
■ Web Request
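An illustrative sketch of what such scoped keys could look like; this is not dmCache's actual implementation, and all names are assumptions:

    # Illustrative scoped cache-entry keys (not dmCache's actual code): narrower
    # scopes add more identifiers to the key, so entries can't leak across scopes.
    def global_key(name):
        return f"global:{name}"                                   # all applications

    def application_key(app_name, name):
        return f"app:{app_name}:{name}"                           # one application

    def session_key(app_name, session_id, name):
        return f"app:{app_name}:session:{session_id}:{name}"      # one user session

    def request_key(app_name, request_id, name):
        return f"app:{app_name}:request:{request_id}:{name}"      # one web request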
Using dmCache
Supported Cache Providers in dmCache

• Memcached
• Redis
• Couchbase
• AWS ElastiCache
• Azure Redis Cache
dmCache in Action!
(Demo)
We’ll be back in 5 min to answer
your questions
Thank you!
