Terracotta Project Issue Tracker
[EHC-1116] Remove call to EhCache
ClassLoaderUtil.getStandardClassLoader() as it has been removed in Ehcache
2.8.3+ Created: 14/Oct/15 Updated: 08/Feb/16 Resolved: 08/Feb/16
Status: Resolved
Project: Ehcache Core
Component/s: ehcache-jgroupsreplication
Affects Version/s: 2.10.0
Fix Version/s: None
Type: Bug
Reporter: Rene Zanner
Resolution: Fixed
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: Java 8, ehcache-jgroupsreplication 1.7
Attachments: EHC-1116.patch, MyJGroupsCacheManagerPeerProviderFactory.java
Terracotta Target: Unknown
Priority: 2 Major
Assignee: Rishabh Monga
Votes: 2
Description
I'm trying to use the "file" property of net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory to configure EhCache with my own JGroups channel:
<?xml version="1.0" encoding="UTF-8"?>
<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="http://ehcache.org/ehcache.xsd"
name="outmatch-cache" updateCheck="false">
<cacheManagerPeerProviderFactory class="net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory"
properties="channelName=some-name::file=jgroups-configuration.xml"
propertySeparator="::"/>
<!-- ... -->
</ehcache>
Unfortunately this throws a "NoSuchMethodError", because the factory tries to load the given file via ClassLoaderUtil.getStandardClassLoader(), which was removed from EhCache a long time ago.
java.lang.NoSuchMethodError: net.sf.ehcache.util.ClassLoaderUtil.getStandardClassLoader()Ljava/lang/ClassLoader;
at net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory.createCachePeer
at net.sf.ehcache.config.ConfigurationHelper.createCachePeerProviders(Configuratio
at net.sf.ehcache.CacheManager.configure(CacheManager.java:795)
at net.sf.ehcache.CacheManager.doInit(CacheManager.java:471)
at net.sf.ehcache.CacheManager.init(CacheManager.java:395)
at net.sf.ehcache.CacheManager.<init>(CacheManager.java:270)
at net.sf.ehcache.CacheManager.newInstance(CacheManager.java:1116)
at net.sf.ehcache.CacheManager.newInstance(CacheManager.java:1060)
...
Similar to the fix done for Hibernate (https://hibernate.atlassian.net/browse/HHH-9497), this method call must be replaced with Thread.currentThread().getContextClassLoader().
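The proposed change can be sketched with plain JDK code. Note that `ConfigFileResolver` and its method names are hypothetical illustrations, not the actual Ehcache patch:

```java
// Hypothetical sketch of the proposed fix (not the actual Ehcache code):
// resolve the configuration file through the context class loader instead
// of the removed ClassLoaderUtil.getStandardClassLoader().
public class ConfigFileResolver {

    // Prefer the thread's context class loader, as the Hibernate fix did,
    // falling back to this class's own loader if none is set.
    static ClassLoader resolveLoader() {
        ClassLoader cl = Thread.currentThread().getContextClassLoader();
        return (cl != null) ? cl : ConfigFileResolver.class.getClassLoader();
    }

    // Locate a classpath resource such as jgroups-configuration.xml.
    public static java.net.URL locate(String fileName) {
        return resolveLoader().getResource(fileName);
    }

    public static void main(String[] args) {
        System.out.println(resolveLoader() != null); // prints true
    }
}
```

The fallback matters in container environments (Tomcat, OSGi), where the context class loader is the one that can actually see the application's resources.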
Comments
Comment by Ramses Gomez [ 21/Oct/15 ]
Any update on this? We have the same problem and we had to downgrade ehcache to 2.8.2 to be
able to use the jgroups distribution. Thanks
Comment by Rene Zanner [ 02/Nov/15 ]
Maybe it helps when you vote for this issue - EhCache with JGroups replication does not seem
to have much attention at the moment
Comment by Ryan Martin [ 18/Jan/16 ]
We're having the same problem when trying to upgrade from Ehcache 2.6.10 to 2.10.1. Are
there any workarounds?
Comment by Rene Zanner [ 18/Jan/16 ]
Two "workarounds" (if you want to call them that):
1. Do not use "file"; use only inline configuration.
2. Write your own JGroupsCacheManagerPeerProviderFactory which supports "file" correctly.
I did the second...
Comment by Ryan Martin [ 18/Jan/16 ]
Rene, based on your suggestion I constructed the attached workaround. It looks like a real patch would need to be generated against net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory. The obsolete call to ClassLoaderUtil.getStandardClassLoader() still exists in the latest release of ehcache-jgroupsreplication, which is 1.7.4.
Comment by Ryan Martin [ 18/Jan/16 ]
Here's a patch against your Ehcache SVN trunk/HEAD (r10208).
Comment by Rishabh Monga [ 08/Feb/16 ]
fix committed to revision 10245 of trunk
Comment by Rishabh Monga [ 08/Feb/16 ]
Fix committed to trunk
[EHC-1113] Bootstrap completes before replication Created: 06/Oct/15
Status: New
Project: Ehcache Core
Component/s: ehcache-jgroupsreplication
Affects Version/s: 1.7.0
Fix Version/s: None
Type: Bug
Reporter: Flavel Heyman
Resolution: Unresolved
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Updated: 06/Oct/15
Attachments: ehcache.xml
Terracotta Target: Unknown
Priority: 2 Major
Assignee: Issue Review Board
Votes: 0
Bug Found In Detail: Java 1.7, Tomcat 7.0.50, jgroups:3.6.6.Final, ehcache:2.10.0 (also had the same result for ehcache-core:2.6.11)
Description
We use bootstrap to keep information loaded across 2 server nodes. When doing a rolling restart, a node always stays up. When I boot the 1st node up from a deployment, sometimes (not always) the cache is not loaded back into the 1st node from the 2nd node, where the data is replicated:
JGroupsCacheReceiver|DEBUG||received bootstrap complete: cache=tokenCache
JGroupsBootstrapManager|INFO ||Bootstrap for cache tokenCache is complete, loaded 0 elements
JGroupsCacheReceiver|DEBUG||received bootstrap reply: cache=tokenCache, key=92b00b82-10f7-41d5-81bf-77cda762d421
JGroupsBootstrapManager|WARN ||No BootstrapRequest registered for cache tokenCache, the event will have no effect: JGroupEventMessage [event=BOOTSTRAP_RESPONSE, cacheName=tokenCache, serializableKey=92b00b82-10f7-41d5-81bf-77cda762d421, element=[ key = 92b00b82-10f7-41d5-81bf-77cda762d421, value=user1234, version=1, hitCount=0, CreationTime = 1444165390000, LastAccessTime = 1444165390000 ]]
JGroupsBootstrapManager|DEBUG||Removed BootstrapRequest [cache=tokenCache, bootstrapStatus=COMPLETE, boostrapCompleteLatch=0, replicated=0, asynchronous=true, chunkSize=5000000]
Since Node 2 seems to be sending a "complete" message, it doesn't seem to be a timeout problem; it seems Node 2 just doesn't check, or the process is sometimes asleep when it receives the message.
See "Bug Found In Detail" for version numbers.
(For 2.10.0 I had to do the fix described here to get it to work
http://stackoverflow.com/questions/29298776/how-do-i-integrate-ehcache-2-9-jgroupsreplication)
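The symptom in the log (a reply rejected with "No BootstrapRequest registered" right after "bootstrap complete") is consistent with a race between the replies and the completion message. A JDK-only toy model of that ordering problem (this is an illustration, not the JGroupsBootstrapManager code):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

public class BootstrapRaceDemo {
    // cache name -> count of elements loaded by an in-flight bootstrap request
    static final ConcurrentMap<String, AtomicInteger> inFlight = new ConcurrentHashMap<>();

    static void register(String cache) { inFlight.put(cache, new AtomicInteger()); }

    // A reply only has an effect while the request is still registered,
    // mirroring the "No BootstrapRequest registered" warning in the log.
    static void onReply(String cache) {
        AtomicInteger n = inFlight.get(cache);
        if (n != null) n.incrementAndGet();
    }

    // "complete" deregisters the request and reports how much was loaded.
    static int onComplete(String cache) {
        AtomicInteger n = inFlight.remove(cache);
        return (n == null) ? 0 : n.get();
    }

    public static void main(String[] args) {
        register("tokenCache");
        int loaded = onComplete("tokenCache"); // complete arrives first...
        onReply("tokenCache");                 // ...so this reply is dropped
        System.out.println("loaded " + loaded + " elements"); // prints: loaded 0 elements
    }
}
```

If the completion message can overtake the replies on the wire or in the receiver, the request is removed before any element is counted, which matches the "loaded 0 elements" reports above.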
Comments
Comment by Flavel Heyman [ 06/Oct/15 ]
Forgot to retrieve the beginning of the messaging for the cache:
Cache|DEBUG||Initialised cache: tokenCache
JGroupsBootstrapManager|DEBUG||Scheduled BootstrapRequest Reference cleanup timer with 600 period
JGroupsBootstrapManager|DEBUG||Registered BootstrapRequest [cache=tokenCache, bootstrapStatus=UNSENT, boostrapCompleteLatch=1, replicated=0, asynchronous=true, chunkSize=5000000]
ConfigurationHelper|DEBUG||CacheDecoratorFactory not configured. Skipping for 'tokenCache'
ConfigurationHelper|DEBUG||CacheDecoratorFactory not configured for defaultCache. Skipping 'tokenCache'.
JGroupsBootstrapManager|DEBUG||Loading cache tokenCache with local address node1-56727 from peers: [node2-21383]
10-06-2015|16:04:14:025|net.sf.ehcache.distribution.jgroups.JGroupsBootstrapManager|DEBUG||Requesting bootstrap of tokenCache from node2-21383
Comment by Flavel Heyman [ 06/Oct/15 ]
The following is a successful bootstrap. It says it loaded 0 elements, but it lies.
Cache|DEBUG||Initialised cache: tokenCache
JGroupsBootstrapManager|DEBUG||Scheduled BootstrapRequest Reference cleanup timer with 600 period
JGroupsBootstrapManager|DEBUG||Registered BootstrapRequest [cache=tokenCache, bootstrapStatus=UNSENT, boostrapCompleteLatch=1, replicated=0, asynchronous=true, chunkSize=5000000]
ConfigurationHelper|DEBUG||CacheDecoratorFactory not configured. Skipping for 'tokenCache'
ConfigurationHelper|DEBUG||CacheDecoratorFactory not configured for defaultCache. Skipping 'tokenCache'.
JGroupsBootstrapManager|DEBUG||Loading cache tokenCache with local address node1-39221 from peers: [node2-21383]
JGroupsBootstrapManager|DEBUG||Requesting bootstrap of tokenCache from node2-21383
JGroupsCachePeer|DEBUG||Sending 1 JGroupEventMessages synchronously.
JGroupsCacheReceiver|DEBUG||received bootstrap complete: cache=tokenCache
JGroupsCacheReceiver|DEBUG||received bootstrap reply: cache=tokenCache, key=39e16bf84d06-afd8-2765bff1ad71
JGroupsBootstrapManager|INFO ||Bootstrap for cache tokenCache is complete, loaded 0 elements
JGroupsBootstrapManager|DEBUG||Removed BootstrapRequest [cache=tokenCache, bootstrapStatus=COMPLETE, boostrapCompleteLatch=0, replicated=0, asynchronous=true, chunkSize=5000000]
[EHC-1093] Incompatible constructors between JGroupEventMessage and
net.sf.ehcache.distribution.EventMessage Created: 16/Dec/14 Updated: 17/Dec/14
Status: New
Project: Ehcache Core
Component/s: ehcache-jgroupsreplication
Affects Version/s: 2.9.0
Fix Version/s: None
Type: Bug
Reporter: Ilya Kikoin
Resolution: Unresolved
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Terracotta Target: Unknown
Priority: 2 Major
Assignee: Issue Review Board
Votes: 0
Description
These are the relevant parts of my ehcache.xml:
<cacheManagerPeerProviderFactory
class="net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory"
properties="channel=ehcache^connect=UDP(mcast_addr=231.12.21.132;mcast_port=45566;ip_ttl=32;
mcast_send_buf_size=150000;mcast_recv_buf_size=80000):
PING(timeout=2000;num_initial_members=6):
MERGE2(min_interval=5000;max_interval=10000):
FD_SOCK:VERIFY_SUSPECT(timeout=1500):
pbcast.NAKACK(gc_lag=10;retransmit_timeout=3000):
UNICAST(timeout=5000):
pbcast.STABLE(desired_avg_gossip=20000):
FRAG:
pbcast.GMS(join_timeout=5000;join_retry_timeout=2000;shun=false;print_local_addr=false)"
propertySeparator="^"
/>
<cache name="sampleRepicatedCache2"
maxEntriesLocalHeap="10"
eternal="false"
timeToIdleSeconds="100"
timeToLiveSeconds="100">
<cacheEventListenerFactory class="net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory"
properties="replicateAsynchronously=true, replicatePuts=true,
replicateUpdates=true, replicateUpdatesViaCopy=false,
replicateRemovals=true,asynchronousReplicationIntervalMillis=1000"/>
</cache>
The synchronization crashes with this:
Exception in thread "main" java.lang.NoSuchMethodError: net.sf.ehcache.distribution.EventMessage.<init>(ILjava/io/Serializable;Lnet/sf/ehcache/Element;)V
at net.sf.ehcache.distribution.jgroups.JGroupEventMessage.<init>(JGroupEventMessage.java:86)
at net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicator.replicatePutNotification(JGroupsCacheReplicator.ja
at net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicator.notifyElementPut(JGroupsCacheReplicator.java:
at net.sf.ehcache.event.RegisteredEventListeners.internalNotifyElementPut(RegisteredEventListeners.java:192)
at net.sf.ehcache.event.RegisteredEventListeners.notifyElementPut(RegisteredEventListeners.java:170)
at net.sf.ehcache.Cache.notifyPutInternalListeners(Cache.java:1631)
at net.sf.ehcache.Cache.putInternal(Cache.java:1601)
at net.sf.ehcache.Cache.put(Cache.java:1526)
at net.sf.ehcache.Cache.put(Cache.java:1491)
I checked the source code of these classes: the net.sf.ehcache.distribution.jgroups.JGroupEventMessage constructor does not match the net.sf.ehcache.distribution.EventMessage constructor. JGroupEventMessage has:
public JGroupEventMessage(int event, Serializable key, Element element, String cacheName)
{ super(event, key, element); this.cacheName = cacheName; this.asyncTime = -1; }
while the EventMessage constructor is:
public EventMessage(Ehcache cache, Serializable key)
{ this.cache = cache; this.key = key; }
Apparently, this can't work. Just to be sure, I extracted the class files and decompiled them, and that is the code.
Comments
Comment by Ilya Kikoin [ 16/Dec/14 ]
After looking through the code, I realized that jgroupsreplication 1.7 seems compatible with ehcache-core 2.9.0, since there JGroupEventMessage inherits from LegacyEventMessage, which looks like a copy from 2.5.0. But I didn't find any build artifact for that version.
Comment by Ilya Kikoin [ 17/Dec/14 ]
The only change that needs to be made in jgroupsreplication 1.7 is in JGroupsCacheManagerPeerProviderFactory.java, line 61:
final ClassLoader contextClassLoader = Thread.currentThread().getContextClassLoader();
[EHC-972] JGroups replication simple synchronous RSVP fix Created: 28/Sep/12
Updated:
28/Sep/12 Resolved: 28/Sep/12
Status: Resolved
Project: Ehcache Core
Component/s: ehcache-jgroupsreplication
Affects Version/s: 2.5.2
Fix Version/s: None
Type: Patch
Reporter: Cedric Vidal
Resolution: Duplicate
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Attachments: EHC-972.patch, udp-RSVP-fixed.xml
Issue Links: Duplicate
Terracotta Target: Ehcache 2.5.2
Priority: 2 Major
Assignee: Issue Review Board
Votes: 0
Description
As mentioned in EHC-874, "Synchronous" JGroups replication is not really synchronous. I stumbled upon the same problem as Manuel Dominguez Sarmiento, but I fixed it using a simpler approach.
As stated in the JGroups documentation, in order for messages to be sent synchronously, you need two things:
1. The RSVP protocol declared in the stack, above the GMS protocol and under the UNICAST one.
2. The RSVP flag set on the message to be sent synchronously.
The first condition is met by proper configuration of the JGroups stack in the JGroups XML configuration file. Note that you need to supply your own properly configured file, because the default JGroups udp.xml file puts the RSVP protocol too low in the stack to receive the view change events from the GMS protocol that are required to properly wait for all cluster members' acknowledgements.
A udp.xml-based JGroups configuration file which fixes the RSVP protocol position is attached for convenience.
The second condition needs to be carried out by the code, and the jgroups-replication module doesn't set that flag.
This simple patch just modifies the JGroupsCachePeer so that it sets the RSVP flag on the
message when the cache level JGroupsCacheReplicatorFactory replicateAsynchronously
property is set to false.
Comments
Comment by Cedric Vidal [ 28/Sep/12 ]
Sorry, I realized I posted this patch to the wrong project; I recreated it here: EHCJGRP-9
[EHC-961] JMX MBean for JGroups Message Receiver Created: 29/Jul/12
Status: Open
Project: Ehcache Core
Component/s: ehcache-jgroupsreplication
Affects Version/s: None
Fix Version/s: None
Type: New Feature
Reporter: Eric Dalquist
Resolution: Unresolved
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Updated: 04/Apr/13
Attachments: EHC-961.patch
Terracotta Target: Vicente-GA
Priority: 3 Minor
Assignee: Unassigned
Votes: 0
Description
Add a JMX MBean to track the number of each type of replicated messages received and the
rate at which they were received.
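A minimal sketch of the kind of MBean this request describes, using only the JDK; the interface and attribute names here are illustrative, not the ones from the attached patch:

```java
import java.util.concurrent.atomic.AtomicLong;

// Standard-MBean-style management interface (illustrative names).
interface ReceiverStatsMBean {
    long getPutCount();
    long getRemoveCount();
    double getMessagesPerSecond();
}

public class ReceiverStats implements ReceiverStatsMBean {
    private final AtomicLong puts = new AtomicLong();
    private final AtomicLong removes = new AtomicLong();
    private final long startNanos = System.nanoTime();

    // Called by the message receiver for each replicated message type.
    void onPut()    { puts.incrementAndGet(); }
    void onRemove() { removes.incrementAndGet(); }

    public long getPutCount()    { return puts.get(); }
    public long getRemoveCount() { return removes.get(); }

    // Naive average rate since startup; the actual patch reportedly uses an
    // external rate-tracking library instead of this calculation.
    public double getMessagesPerSecond() {
        double seconds = (System.nanoTime() - startNanos) / 1_000_000_000.0;
        return (puts.get() + removes.get()) / Math.max(seconds, 1e-9);
    }

    public static void main(String[] args) {
        ReceiverStats stats = new ReceiverStats();
        stats.onPut();
        stats.onRemove();
        // In the real feature this object would be registered with the
        // platform MBeanServer alongside the CacheManager's MBeans.
        System.out.println(stats.getPutCount() + " put, " + stats.getRemoveCount() + " remove");
    }
}
```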
Comments
Comment by Eric Dalquist [ 30/Jul/12 ]
Updated to use external library for rate tracking
[EHC-927] ERROR
net.sf.ehcache.distribution.RMIAsynchronousCacheReplicator - Exception on
flushing of replication queue: null. Continuing... java.lang.NullPointerException
Created: 13/Feb/12 Updated: 27/Jul/12 Resolved: 14/Feb/12
Status: Closed
Project: Ehcache Core
Component/s: ehcache-core, ehcache-jgroupsreplication
Affects Version/s: 2.5.1
Fix Version/s: 2.5.2
Type: Bug
Reporter: Andrey Adamovich
Resolution: Not a Bug
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Terracotta Target: Ehcache 2.5.2
Priority: 2 Major
Assignee: Chris Dennis
Votes: 0
Description
I'm trying to configure EhCache with JGroups-based replication, but the log gets flooded with the following exception whenever an element is added to the cache:
12061 [Replication Thread] ERROR net.sf.ehcache.distribution.RMIAsynchronousCacheReplicator - Exception on flushing of replication queue: null. Continuing...
java.lang.NullPointerException
at net.sf.ehcache.distribution.RMISynchronousCacheReplicator.listRemoteCachePeers(RMISynchronousCacheR
at net.sf.ehcache.distribution.RMIAsynchronousCacheReplicator.flushReplicationQueue(RMIAsynchronousCache
at net.sf.ehcache.distribution.RMIAsynchronousCacheReplicator.replicationThreadMain(RMIAsynchronousCache
at net.sf.ehcache.distribution.RMIAsynchronousCacheReplicator.access$100(RMIAsynchronousCacheReplicato
at net.sf.ehcache.distribution.RMIAsynchronousCacheReplicator$ReplicationThread.run(RMIAsynchronousCache
Some details are also available here:
http://stackoverflow.com/questions/9228526/ehcache-jgroups-give-exception-on-flushing-of-replication-queue-n
Comments
Comment by Chris Dennis [ 13/Feb/12 ]
From looking at your ehcache configuration on the stack-overflow post it looks like you are mixing
RMI configuration with JGroups configuration.
The first section in your configuration configures JGroups based peer discovery:
<cacheManagerPeerProviderFactory
class="net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory"
properties="jgroups.xml"
/>
However, in the cache section you've configured RMI based replication.
<cacheEventListenerFactory
class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
properties="replicateAsynchronously=true, replicatePuts=true,
replicateUpdates=true, replicateUpdatesViaCopy=true, replicateRemovals=true"
/>
There is a guide to JGroups based replication here:
http://www.ehcache.org/documentation/replication/jgroups-replicated-caching. For example if you
switch to a section such as this for your caches then you should have more luck:
<cacheEventListenerFactory
class="net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory"
properties="replicateAsynchronously=true, replicatePuts=true,
replicateUpdates=true, replicateUpdatesViaCopy=false,
replicateRemovals=true" />
The NullPointerExceptions you are seeing are not very user-friendly, however. I'll look into whether we can fail fast with such broken configurations.
Comment by Andrey Adamovich [ 14/Feb/12 ]
Thanks, Chris. You are right; I probably blindly copied it from another configuration file. It looks better now.
[EHC-876] Update JGroups integration to support JGroups 3.0.0 Created:
03/Aug/11 Updated: 28/May/13
Status: Open
Project: Ehcache Core
Component/s: ehcache-jgroupsreplication
Affects Version/s: None
Fix Version/s: None
Type: Task
Reporter: Manuel Dominguez Sarmiento
Resolution: Unresolved
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Terracotta Target: Vicente_Holding
Documentation Required: Yes
Priority: 2 Major
Assignee: Unassigned
Votes: 0
Description
Some minor API changes require patching the integration code. Please see EHC-874 for a patch
that solves one other issue, plus patches the code to support JGroups 3.0.0
[EHC-874] "Synchronous" JGroups replication is not really synchronous Created:
27/Jul/11 Updated: 28/May/13
Status: Open
Project: Ehcache Core
Component/s: ehcache-jgroupsreplication
Affects Version/s: 2.4.3
Fix Version/s: None
Type: Bug
Reporter: Manuel Dominguez Sarmiento
Resolution: Unresolved
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Attachments: patch-3.0.0.txt, patch.txt, patch-custom-settings-per-cache-3.0.0.diff
Terracotta Target: Vicente_Holding
Documentation Required: Yes
Priority: 2 Major
Assignee: Unassigned
Votes: 1
Description
Version 1.4 of the ehcache-jgroupsreplication module has synchronous and asynchronous
replication modes. But actually, BOTH are asynchronous.
This is the deal: internally, the module buffers replication notifications when operating in asynchronous mode, and sends them in bulk (every second, by default) by invoking JGroups' channel.send() for the whole batch. When a synchronous notification is required, no buffering occurs and channel.send() is invoked directly.
This might seem like correct behaviour, but it's not. The reason is that channel.send() does NOT block until the replication notification is acknowledged by all peers. It simply dispatches the message to all peers, without waiting for the replication to complete.
Some use cases require real 100% synchronous operation. We're currently working on a custom
distributed HttpSession management solution based on EhCache + JGroups, and we need to
make sure the puts() are replicated before finishing request processing (the idea is to have
distributed sessions with an optimized SessionDelta updater, and have a plain load-balancer in
front without any container-specific configuration to deal with).
The solution is fairly simple. JGroups provides a MessageDispatcher building block with a castMessage() method that allows specifying whether we should block until all, some, or none of the recipients complete processing of the message. We have patched the code to allow configuring two parameters, syncMode and syncTimeout, in order to control this behaviour. If the parameters are not supplied, we keep the current non-blocking, pseudo-synchronous but really asynchronous behaviour.
We would appreciate peer review from the module maintainers, and if this patch is accepted,
we'd like to see it in ehcache-jgroupsreplication version 1.5
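The blocking semantics of syncMode=GET_ALL with a syncTimeout can be illustrated with plain JDK futures. This models only the waiting behaviour; the actual patch uses JGroups' MessageDispatcher.castMessage(), not this code:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class SyncReplicationSketch {

    // Model of syncMode=GET_ALL with syncTimeout: block until every peer has
    // acknowledged the replication message, or give up after timeoutMs.
    static boolean awaitAllAcks(List<CompletableFuture<Void>> peerAcks, long timeoutMs) {
        try {
            CompletableFuture.allOf(peerAcks.toArray(new CompletableFuture[0]))
                             .get(timeoutMs, TimeUnit.MILLISECONDS);
            return true;                 // all peers completed processing
        } catch (TimeoutException e) {
            return false;                // a peer missed the syncTimeout window
        } catch (Exception e) {
            return false;                // a peer failed outright
        }
    }

    public static void main(String[] args) {
        // Two peers that have already acknowledged: the call returns immediately.
        List<CompletableFuture<Void>> acks =
                List.of(CompletableFuture.completedFuture(null),
                        CompletableFuture.completedFuture(null));
        System.out.println(awaitAllAcks(acks, 1000)); // prints true
    }
}
```

GET_FIRST and GET_MAJORITY would correspond to waiting for one or for a quorum of these futures instead of all of them.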
Comments
Comment by Manuel Dominguez Sarmiento [ 27/Jul/11 ]
The attached patch for current version 1.4 implements the proposed solution.
Comment by Manuel Dominguez Sarmiento [ 27/Jul/11 ]
Example config:
<cacheManagerPeerProviderFactory
class="net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory"
properties="file=jgroups.xml, syncMode=GET_ALL, syncTimeout=10000" />
Sync modes are documented in org.jgroups.blocks.Request:
- GET_ALL blocks until all peers complete processing
- GET_NONE does not block, just like the current behaviour using channel.send()
- GET_FIRST blocks until the first peer completes
- GET_MAJORITY blocks until a majority of the peers (non-faulty members) complete
- GET_ABS_MAJORITY blocks until a majority of the peers (including faulty members) complete
- GET_N has not been implemented (it seems redundant and non-adaptive to peer groups changing the number of participants)
Comment by Manuel Dominguez Sarmiento [ 03/Aug/11 ]
This patch does the same as the other one, but applies all necessary changes to be compatible
with JGroups 3.0.0 which was just released and contains some minor API changes.
Comment by Manuel Dominguez Sarmiento [ 29/Aug/11 ]
Attached a new patch which allows setting syncMode and syncTimeout per-cache. The general
setting per cacheManager is still available (default if no per-cache custom values have been set).
[EHC-760] Add JMX support to JGroups Replication Created: 02/Aug/10
Updated:
27/Jul/12 Resolved: 04/Aug/10
Status: Closed
Project: Ehcache Core
Component/s: ehcache-jgroupsreplication
Affects Version/s: None
Fix Version/s: 2.3.0
Type: New Feature
Reporter: Eric Dalquist
Resolution: Fixed
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Attachments: ehcache-jgroupsreplication.patch
Terracotta Target: Magnum
Fixed In Revision: 2609
Priority: 2 Major
Assignee: Unassigned
Votes: 0
Description
Add support for calling the JGroups JMX registration APIs when the CacheManager is
registered with the MBeanServer. Depends on
Comments
Comment by gluck [ 04/Aug/10 ]
Patch incomplete. There is also a change in the JGroups version required.
Comment by gluck [ 04/Aug/10 ]
Had to significantly change some bits to support the required 2.10 version of JGroups.
[EHC-609] JGroups 2.8 breaks Bootstrap Created: 20/Jan/10
Status: Closed
Project: Ehcache Core
Component/s: ehcache-jgroupsreplication
Affects Version/s: 1.7.2
Fix Version/s: None
Type: Bug
Reporter: Anonymous
Resolution: Fixed
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Updated: 27/Jul/12 Resolved: 29/Aug/10
Description
I write to you because I'm not able to add the issue to your JIRA. I registered and validated an account at Terracotta, but I'm not able to log into the JIRA.
I have a serious problem with distributed EHCache via
JGroups. My problem is with bootstrapping of distributed
EHCache with JGroups. I'm using JGroups 2.8, EHCache core
1.7.2 and ehcache-jgroupsreplication 1.3. In JGroups 2.8,
the default implementation of Address was changed from
IpAddress to org.jgroups.util.UUID which causes the JGroups
Bootstrapper to fail with a ClassCastException where it
tries to cast a UUID to an IpAddress without any instanceof
checks.
For now I have to try to switch to RMI for Bootstrapping. Do
I only have to set the bootstrapCacheLoaderFactory for this,
or do I need anything else like a
cacheManagerPeerListenerFactory what I don't need for
JGroups replication and bootstrapping?
Priority: 2 Major
Assignee: Unassigned
Votes: 0
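The ClassCastException described above comes from an unguarded cast. A JDK-only sketch of the defensive check the report implies; the types below are stand-ins, not the real JGroups classes:

```java
public class AddressGuardSketch {
    // Stand-ins for org.jgroups.Address and its implementations.
    interface Address {}
    static class IpAddress implements Address {}
    static class Uuid implements Address {}   // JGroups 2.8's new default

    // The Bootstrapper cast Address straight to IpAddress; guarding with
    // instanceof returns null instead of throwing ClassCastException.
    static IpAddress asIpAddress(Address a) {
        return (a instanceof IpAddress) ? (IpAddress) a : null;
    }

    public static void main(String[] args) {
        System.out.println(asIpAddress(new Uuid()));               // prints null
        System.out.println(asIpAddress(new IpAddress()) != null);  // prints true
    }
}
```

Code written against an interface like Address generally should not assume a particular implementation; JGroups 2.8 changing its default from IpAddress to UUID is exactly the kind of upgrade such an assumption breaks on.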
Comments
Comment by gluck [ 20/Jan/10 ]
Reported by Björn Kautler <[email protected]>
Comment by gluck [ 21/Jan/10 ]
Downgrading to JGroups 2.7 fixes the issue.
Comment by gluck [ 29/Aug/10 ]
JGroups replication has been rewritten as of 30 August 2010 and the version of JGroups
updated to the latest version.
This issue is therefore moot.
You will need to get jgroups replication out of trunk or wait for the release of ehcache-jgroupsreplication 1.4, expected in October 2010.
[EHC-426] Release each of the modules to use Ehcache 1.7.0 Created: 14/Oct/09
Updated:
14/Jan/10 Resolved: 03/Dec/09
Status: Closed
Project: Ehcache Core
Component/s: ehcache-debugger, ehcache-jcache, ehcache-jgroupsreplication, ehcache-jmsreplication, ehcache-openjpa, ehcache-server, ehcache-web
Affects Version/s: 1.7.0
Fix Version/s: 1.7.2
Type: Task
Reporter: Anonymous
Resolution: Fixed
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Priority: 2 Major
Assignee: Unassigned
Votes: 0
Comments
Comment by gluck [ 21/Oct/09 ]
This is likely to be delayed to 1.7.1 because of open issues in core.
Comment by gluck [ 03/Dec/09 ]
This was reminder to release what we could once 1.7.1 was done.
Closing the reminder as 1.7.1 is done and the releases are imminent.
[EHC-383] JGroup Replication does not work with decorated caches Created:
22/Sep/09 Updated: 27/Jul/12 Resolved: 30/Aug/10
Status: Closed
Project: Ehcache Core
Component/s: ehcache-jgroupsreplication
Affects Version/s: None
Fix Version/s: 1.4.0
Type: Bug
Reporter: Jon Christiansen
Resolution: Fixed
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Terracotta Target: Magnum
Fixed In Revision: 2696
Ehcache Priority: 5
Ehcache Resolution: Fixed
Priority: 3 Minor
Assignee: Unassigned
Votes: 0
Attachments: patch.txt
Issue Links: Cloners - clones EHC-222 JMS Replication does not work with de... (Closed)
Description
Using version 0.3 of JMS Replication, the remote peer which listens for cache updates produces the
following stack trace when it receives a request to update a DECORATED cache:
14-Apr-2009 16:47:28 net.sf.ehcache.distribution.jms.JMSCachePeer onMessage
WARNING: Unable to handle JMS Notification: null
java.lang.NullPointerException
at net.sf.ehcache.distribution.jms.JMSCachePeer.put(JMSCachePeer.java:210)
at net.sf.ehcache.distribution.jms.JMSCachePeer.handleNotification(JMSCachePeer.java:132)
at net.sf.ehcache.distribution.jms.JMSCachePeer.handleObjectMessage(JMSCachePeer.java:305)
at net.sf.ehcache.distribution.jms.JMSCachePeer.onMessage(JMSCachePeer.java:246)
at org.apache.activemq.ActiveMQMessageConsumer.dispatch(ActiveMQMessageConsumer.java:967)
at org.apache.activemq.ActiveMQSessionExecutor.dispatch(ActiveMQSessionExecutor.java:122)
at org.apache.activemq.ActiveMQSessionExecutor.iterate(ActiveMQSessionExecutor.java:192)
at org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:122)
at org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:43)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
at java.lang.Thread.run(Thread.java:619)
The cause is that JMSCachePeer obtains strictly Cache implementations rather than the more general Ehcache implementations from the CacheManager; i.e. it should use cacheManager.getEhcache(cacheName) rather than cacheManager.getCache(cacheName).
I will submit a patch for this to be used against JMSCachePeer SVN Rev 855.
I will submit a patch for this to be used against JMSCachePeer SVN Rev 855.
(If others use this to create their own version of JMS Replication then you should know that JMS
Replication project won't compile against the HEAD version of Ehcache since there are some
changes due to JSR exceptions - perhaps this relates to Bug 2780720?? - anyhow my second patch
file provides a patch to JMSCacheLoader and JMSCacheLoaderFactory)
I have a fix for this which
Sourceforge Ticket ID: 2785071 - Opened By: alphabravo - 1 May 2009 14:24 UTC
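The getCache vs. getEhcache distinction driving this bug can be modelled with plain JDK types; these are stand-ins for illustration, not Ehcache's actual classes:

```java
import java.util.HashMap;
import java.util.Map;

public class DecoratedCacheLookup {
    // Stand-ins for net.sf.ehcache.Ehcache, Cache, and a decorator.
    interface Ehcache {}
    static class Cache implements Ehcache {}
    static class BlockingCache implements Ehcache {}  // decorator, not a Cache

    static final Map<String, Ehcache> registry = new HashMap<>();

    // Like CacheManager.getCache(name): null unless the entry is strictly a Cache.
    static Cache getCache(String name) {
        Ehcache c = registry.get(name);
        return (c instanceof Cache) ? (Cache) c : null;
    }

    // Like CacheManager.getEhcache(name): returns decorated caches too.
    static Ehcache getEhcache(String name) {
        return registry.get(name);
    }

    public static void main(String[] args) {
        registry.put("decorated", new BlockingCache());
        System.out.println(getCache("decorated"));            // prints null
        System.out.println(getEhcache("decorated") != null);  // prints true
    }
}
```

This is why the JMS peer hit an NPE and the JGroups peer silently no-ops on decorated caches: the narrow lookup returns null, and the two call sites handle that null differently.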
Comments
Comment by Jon Christiansen [ 22/Sep/09 ]
Apologies if cloning wasn't the best operation on this issue, but there is a nearly identical issue
with JGroups replication, where replication does not function if your cache is a decorated cache.
Line 88 of method handleJGroupNotification in net.sf.ehcache.distribution.jgroups.JGroupManager does a CacheManager.getCache(), which will return null unless the object is strictly a Cache; therefore, for DECORATED caches it ends up being a no-op rather than the NPE that occurred in the JMS implementation.
As a side note, the javadoc in Cache states that SelfPopulatingCache, BlockingCache are
decorators for Cache, but if you look at these Classes, they clearly state in their javadoc that
they are decorators for Ehcache. Perhaps the javadoc can be made more consistent as well.
Comment by Jon Christiansen [ 23/Sep/09 ]
Patch to src/main/java/net/sf/ehcache/distribution/jgroups/JGroupManager.java which allowed
my specific case to work.
Comment by gluck [ 30/Aug/10 ]
Fixed. Now uses Ehcache rather than Cache, so decorated caches can now be replicated.
[EHC-251] JGroups implementation of Bootstrap cache loader Created: 21/Sep/09
Updated:
27/Jul/12 Resolved: 29/Aug/10
Status: Closed
Project: Ehcache Core
Component/s: ehcache-jgroupsreplication
Affects Version/s: None
Fix Version/s: None
Type: Bug
Reporter: Sourceforge Tracker
Resolution: Fixed
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Ehcache Priority: 5
Ehcache Resolution: None
Priority: 3 Minor
Assignee: Unassigned
Votes: 0
Description
I am using ehcache (ehcache-1.5.0.jar) with JGroups support for distributed caching. Given below are some of the configurations that I have done to support JGroups distributed caching, including the bootstrap cache loader.
<cacheManagerPeerProviderFactory
class="net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory"
properties="connect=UDP(mcast_addr=xxx.xxx.xxx.xxx;mcast_port=xxxx):PING:
MERGE2:FD_SOCK:VERIFY_SUSPECT:pbcast.NAKACK:UNICAST:pbcast.STABLE:FRAG:pbcast.GMS"/>
<cache name="sampleCache"
maxElementsInMemory="100"
maxElementsOnDisk="100"
eternal="false"
overflowToDisk="true"
timeToIdleSeconds="0"
timeToLiveSeconds="600"
diskSpoolBufferSizeMB="10"
diskPersistent="false"
diskExpiryThreadIntervalSeconds="120"
>
<cacheEventListenerFactory class="net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory"
properties="replicateAsynchronously=false, replicatePuts=true,
replicateUpdates=true, replicateUpdatesViaCopy=true,
replicateRemovals=true"/>
<bootstrapCacheLoaderFactory class="JGroupsBootstrapCacheLoaderFactory"
properties="bootstrapAsynchronously=false,
maximumChunkSizeBytes=5000000"/>
</cache>
I have provided implementations for BootstrapCacheLoaderFactory & BootstrapCacheLoader and the same is
configured above.
Are there any other implementations that I need to provide or any other configurations that I need to make to use
bootstrap cache loader using JGroups implementation?
I found one more issue while looking at the source file JGroupManager.java. Many of the method
implementations return null or do not have any implementation. Has this been fixed? Is there an updated patch
release with these implementations?
Sourceforge Ticket ID: 2376175 - Opened By: rakesh_davanum - 2 Dec 2008 08:39 UTC
Comments
Comment by gluck [ 29/Aug/10 ]
ehcache-jgroupsreplication version 1.4, now in trunk and expected to be released in October 2010,
will fix this issue.
[EHC-33] Multiple Ehcache with JGroups in a single application failure Created:
21/Sep/09 Updated: 27/Jul/12 Resolved: 13/Oct/10
Status: Closed
Project: Ehcache Core
Component/s: ehcache-jgroupsreplication
Affects Version/s: None
Fix Version/s: 2.3.0
Type: Bug
Reporter: Sourceforge Tracker
Resolution: Fixed
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Attachments: channelName.patch
Terracotta Target: Ehcache 2.3.0 (Magnum)
Priority: 3 Minor
Assignee: Issue Review Board
Votes: 0
Description
Running multiple Ehcache instances in a single application (web application), then adding each
one to a different JGroups cluster, fails.
The reason is that in package net.sf.ehcache.distribution.jgroups.JGroupManager
the channel name ("EH_CACHE") is hard-coded:
notificationBus = new NotificationBus("EH_CACHE", connect);
As a suggestion, could you make it possible to specify the channel name as part of the
properties, to allow JGroups to uniquely communicate updates to the different caches.
<cacheManagerPeerProviderFactory
class="net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory"
properties="channel=EH_CACHE_1::connect=UDP(...."
propertySeparator="::"/>
Sourceforge Ticket ID: 2810214 - Opened By: andreasbester - 22 Jun 2009 12:06 UTC
Comments
Comment by gluck [ 30/Aug/10 ]
That seems like a good idea.
Comment by gluck [ 30/Aug/10 ]
In trunk. Will be in ehcache-jgroupsreplication-1.4
Changes as suggested. A channel config has been added, defaulting to EHCACHE.
Comment by Juan G [ 10/Oct/10 ]
Hi all!
I think the shared channel for EHCache is a great improvement, but I can't get it to work.
Environment I am using:
glassfish 2.1.1
Ehcache 2.2 (tried with Ehcache 2.2.1 and Ehcache 2.3.0 too).
JGroups 2.10 (tried with JGroups 2.8 too).
ehcache-jgroupsreplication-1.4 from SNAPSHOTS (2010/10/07).
Sample config:
<cacheManagerPeerProviderFactory
class="net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory"
properties="channel=CANAL_1:connect=UDP(mcast_addr=231.12.21.132;mcast_port=45566;ip_
mcast_send_buf_size=150000;mcast_recv_buf_size=80000):
PING(timeout=2000;num_initial_members=6):
MERGE2(min_interval=5000;max_interval=10000):
FD_SOCK:VERIFY_SUSPECT(timeout=1500):
pbcast.NAKACK(gc_lag=10;retransmit_timeout=3000):
UNICAST(timeout=5000):
pbcast.STABLE(desired_avg_gossip=20000):
FRAG:
pbcast.GMS(join_timeout=5000;join_retry_timeout=2000;
shun=false;print_local_addr=true;)"
propertySeparator="::"
/>
And this is what I see in the glassfish log:
INFO: ------------------------------------------------------------------
GMS: address=xxxxxxx-xxxxx, cluster=EH_CACHE, physical address=...
------------------------------------------------------------------
Am I doing anything wrong? Shouldn't "cluster" be "CANAL_1" in this case (the channel property)?
Thanks very much!
Comment by Juan G [ 10/Oct/10 ]
I tried with the config shown in this URL: http://ehcache.org/documentation/configuration.html
The config is this:
<cacheManagerPeerProviderFactory
class="net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory"
properties="channel=channeljuan^connect=UDP(mcast_addr=231.12.21.132;mcast_port=45566;ip_t
mcast_send_buf_size=150000;mcast_recv_buf_size=80000):
PING(timeout=2000;num_initial_members=6):
MERGE2(min_interval=5000;max_interval=10000):
FD_SOCK:VERIFY_SUSPECT(timeout=1500):
pbcast.NAKACK(gc_lag=10;retransmit_timeout=3000):
UNICAST(timeout=5000):
pbcast.STABLE(desired_avg_gossip=20000):
FRAG:
pbcast.GMS(join_timeout=5000;join_retry_timeout=2000;shun=false;print_local_addr=
propertySeparator="^"
/>
In this case I suppose I should be able to see "CLUSTER=channeljuan" in the logs. What am I doing wrong?
Thanks!
Comment by Juan G [ 10/Oct/10 ]
In previous comment the property "print_local_addr" was true, sorry.
Comment by Juan G [ 10/Oct/10 ]
Well, after reading the config documentation properly, I came across this line:
"Multiple JGroups clusters may be run on the same network by specifying a different CacheManager name; the name
is used as the cluster name."
After setting the "name" attribute in the ehcache XML, I now see that name in the logs.
Perhaps I didn't understand the concepts correctly. What I want is to share the same channel between different caches in my
app. I have some jars (JSF related) which contain some caches. Those caches are constructed independently from my app. I
would like to make those caches work on the same channel (the same JGroups channel thread for every cache), with each
providing a channel name that they should share to communicate with the other cluster instances.
Is this possible somehow?
Thanks very much!
Comment by Eric Dalquist [ 11/Oct/10 ]
Hi Juan, I did most of the JGroups integration updates in trunk (1.4-SNAPSHOT).
In trunk a single JChannel is created for each CacheManager. The logic to determine the channel name is:
public String getClusterName() {
    if (this.cacheManager.isNamed()) {
        return this.cacheManager.getName();
    }
    return "EH_CACHE";
}
So whatever name you specify in your ehcache.xml will be the name of the JGroups channel.
For your usage, how many ehcache.xml files do you have configuring your caches? If it is just one, the code should already be
creating a single shared JChannel.
Comment by Eric Dalquist [ 11/Oct/10 ]
Also, adding a CHANNEL_NAME property seems like a good enhancement. It would simply get added to the cluster-name
resolution logic:
public String getClusterName() {
    if (this.channelName != null) {
        return this.channelName;
    }
    if (this.cacheManager.isNamed()) {
        return this.cacheManager.getName();
    }
    return "EH_CACHE";
}
I'll get a patch put together for that change.
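The precedence Eric describes (explicit channel property first, then the CacheManager's name, then the "EH_CACHE" default) can be sketched as a self-contained class. ClusterNameResolver and its constructor are hypothetical stand-ins for illustration, not code from the actual patch:

```java
// Standalone sketch of the proposed cluster-name resolution order.
// Both fields may be null, mirroring an unset property / unnamed manager.
public class ClusterNameResolver {
    private final String channelName; // from the peer-provider properties (hypothetical field)
    private final String managerName; // from the ehcache.xml "name" attribute (hypothetical field)

    public ClusterNameResolver(String channelName, String managerName) {
        this.channelName = channelName;
        this.managerName = managerName;
    }

    public String getClusterName() {
        if (channelName != null) {
            return channelName;          // explicit property wins
        }
        if (managerName != null) {
            return managerName;          // fall back to the manager's name
        }
        return "EH_CACHE";               // legacy hard-coded default
    }

    public static void main(String[] args) {
        System.out.println(new ClusterNameResolver("CANAL_1", "CACHE1").getClusterName()); // CANAL_1
        System.out.println(new ClusterNameResolver(null, "CACHE1").getClusterName());      // CACHE1
        System.out.println(new ClusterNameResolver(null, null).getClusterName());          // EH_CACHE
    }
}
```

This ordering also explains Juan's earlier log output: with no channel property recognized and no name attribute set, resolution falls through to "EH_CACHE".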
Comment by Eric Dalquist [ 11/Oct/10 ]
Adds a channel_name property to the JGroups peer provider configuration that explicitly sets the JChannel name.
Comment by Juan G [ 13/Oct/10 ]
Hi Eric. Thanks for your quick response.
I want to load two different CacheManagers from two different XML files, but share the threads that are created to communicate with
the other cluster instances (I thought this could be accomplished with the same channel name).
My config is like this:
<cacheManagerPeerProviderFactory
class="net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory"
properties="connect=UDP(singleton_name=UDP_SINGLE;mcast_addr=238.255.0.3;mcast_port=45566;
mcast_send_buf_size=150000;mcast_recv_buf_size=80000):
PING(timeout=2000;num_initial_members=6):
MERGE2(min_interval=5000;max_interval=10000):
FD_SOCK:VERIFY_SUSPECT(timeout=1500):
pbcast.NAKACK(gc_lag=10;retransmit_timeout=3000):
UNICAST(timeout=5000):
pbcast.STABLE(desired_avg_gossip=20000):
FRAG:
pbcast.GMS(join_timeout=5000;join_retry_timeout=2000;shun=false;print_local_addr=true)"
propertySeparator="::"
/>
As you can see above, with the "singleton_name" property I can get only one shared transport for all ehcaches (since …).
Unfortunately, this doesn't work well if I don't set a different "name" attribute for each <ehcache> (it throws an error
loading JGroupsBootstrapCacheLoader).
Please can you send me an example of "channel_name" in the config XML? I suppose this change in the 1.4 SNAPSHOT won't be available at
this URL
http://oss.sonatype.org/content/repositories/sourceforge-snapshots/net/sf/ehcache/ehcache-jgroupsreplication/1.4
until tomorrow, isn't it?
Thanks very much
Comment by Eric Dalquist [ 13/Oct/10 ]
It will depend on when Greg gets the patch applied; I don't actually have commit access to ehcache.
From my level of familiarity with how the Ehcache peer providers work, I don't think what you want to do is easy.
The current JGroups code is written completely assuming that it is working within a single CacheManager. To share a
JChannel between different CacheManagers, the cache manager name would need to be added to the message objects sent
over the channel. Also, there is no way without using something like a static field to share the JChannel object between cache
managers, and the problem there is how do you decide when to shut down the JChannel? You can't simply bind it to the
lifecycle of the corresponding CacheManager like it is right now.
My only thought here for a potential way to do this (and it would require some significant refactoring in the JGroups
integration) would be to use something like Spring to manage the JChannel externally to the JGroups peer provider, via a
ThreadLocal or some other static-like lookup mechanism. Again, the JGroups integration code would need to be updated to
handle events from multiple cache managers on a single JChannel.
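The shutdown question Eric raises can be made concrete with a reference-counted static holder: the shared resource is only torn down when the last user releases it. This is a hypothetical sketch of the lifecycle problem only, with Object as a stand-in for org.jgroups.JChannel; it is not code from the integration:

```java
// Sketch: a static holder shared by several cache managers. acquire() would be
// called from each peer provider's init(), release() from its dispose(); the
// "channel" is only torn down when the last manager releases it.
public class SharedChannelHolder {
    private static Object channel; // stand-in for a shared JChannel
    private static int refs;       // how many cache managers hold the channel

    public static synchronized Object acquire() {
        if (channel == null) {
            channel = new Object(); // would be: create and connect a JChannel
        }
        refs++;
        return channel;
    }

    /** Returns true when this release actually closed the channel. */
    public static synchronized boolean release() {
        refs--;
        if (refs == 0) {
            channel = null; // would be: channel.close()
            return true;
        }
        return false;
    }
}
```

Two managers acquiring the holder get the same instance; the first release() returns false (the channel is still in use), and the second returns true (the channel is actually closed).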
Comment by Juan G [ 13/Oct/10 ]
Hi! I think I have a working configuration to share a transport between different caches!
Environment:
Ehcache 2.2.1 SNAPSHOT
ehcache-jgroupsreplication 1.4 SNAPSHOT
JGroups 2.10 GA
First we initialize the first cache manager (new CacheManager(URL)) with this config:
<?xml version="1.0" encoding="UTF-8"?>
<ehcache name="CACHE1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="../config/ehcache.xsd">
<defaultCache
name="defaut"
maxElementsInMemory="5"
eternal="false"
timeToIdleSeconds="20"
timeToLiveSeconds="20"
overflowToDisk="false"
diskPersistent="false"
memoryStoreEvictionPolicy="LRU"
/>
<cacheManagerPeerProviderFactory
class="net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory"
properties="connect=UDP(singleton_name=UDP_SINGLE;mcast_addr=231.12.21.132;mcast_port=45566;
mcast_send_buf_size=150000;mcast_recv_buf_size=80000):
PING(timeout=2000;num_initial_members=6):
MERGE2(min_interval=5000;max_interval=10000):
FD_SOCK:VERIFY_SUSPECT(timeout=1500):
pbcast.NAKACK(gc_lag=10;retransmit_timeout=3000):
UNICAST(timeout=5000):
pbcast.STABLE(desired_avg_gossip=20000):
FRAG:
pbcast.GMS(join_timeout=5000;join_retry_timeout=2000;shun=false;print_local_addr=true)"
propertySeparator="::"
/>
<cache name="cacheApp1"
maxElementsInMemory="1000"
eternal="false"
timeToIdleSeconds="1000"
timeToLiveSeconds="1000"
overflowToDisk="false">
<cacheEventListenerFactory
class="net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory"
properties="replicateAsynchronously=true, replicatePuts=true,
replicateUpdates=true, replicateUpdatesViaCopy=false,
replicateRemovals=true" />
<bootstrapCacheLoaderFactory
class="net.sf.ehcache.distribution.jgroups.JGroupsBootstrapCacheLoaderFactory"/>
</cache>
</ehcache>
Next we initialize a second cache manager with this config:
<?xml version="1.0" encoding="UTF-8"?>
<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="../config/ehcache.xsd">
<defaultCache
name="defaut"
maxElementsInMemory="5"
eternal="false"
timeToIdleSeconds="20"
timeToLiveSeconds="20"
overflowToDisk="false"
diskPersistent="false"
memoryStoreEvictionPolicy="LRU"
/>
<cacheManagerPeerProviderFactory
class="net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory"
properties="connect=UDP(singleton_name=UDP_SINGLE)"
propertySeparator="::"
/>
<cache name="cacheApp2"
maxElementsInMemory="1000"
eternal="false"
timeToIdleSeconds="1000"
timeToLiveSeconds="1000"
overflowToDisk="false">
<cacheEventListenerFactory
class="net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory"
properties="replicateAsynchronously=true, replicatePuts=true,
replicateUpdates=true, replicateUpdatesViaCopy=false,
replicateRemovals=true" />
<bootstrapCacheLoaderFactory
class="net.sf.ehcache.distribution.jgroups.JGroupsBootstrapCacheLoaderFactory"/>
</cache>
</ehcache>
After that you'll see something like this in the logs:
-----------------------------------
GMS: address=xxxxxxxxx, cluster=CACHE1, physical address=xxxxxxxxx:yyyyy
-----------------------------------
So we have a single transport thread for two Ehcache instances. The first ehcache must have the "name" attribute in <ehcache>, and
the singleton_name and UDP transport well configured. The second instance and beyond must have the "singleton_name" property set
to tell JGroups which transport we want to use.
Please tell me if what I am saying makes sense.
Thanks very much!
Comment by gluck [ 13/Oct/10 ]
Patch committed.
Comment by gluck [ 13/Oct/10 ]
Guys, applied Eric's patch. To remain consistent with Ehcache's configuration conventions, I renamed channel_name to
channelName.
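After the rename, a peer-provider configuration using the property might look like the following sketch (the multicast address, port, and protocol stack here are placeholder values, not taken from the patch):

```xml
<cacheManagerPeerProviderFactory
    class="net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory"
    properties="channelName=EH_CACHE_1::connect=UDP(mcast_addr=231.12.21.132;mcast_port=45566):
        PING:MERGE2:FD_SOCK:VERIFY_SUSPECT:pbcast.NAKACK:UNICAST:pbcast.STABLE:FRAG:pbcast.GMS"
    propertySeparator="::"/>
```

Note the "::" propertySeparator, which keeps the single colons inside the JGroups protocol stack from being split as property delimiters.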
Generated at Wed May 03 14:35:24 PDT 2017 using JIRA 6.2.4#6261sha1:4d2e6f6f26064845673c8e7ffe9b6b84b45a6e79.