Sunday, October 23, 2011
How to install GXT (or any other binary distribution) into Nexus (or any other maven repository)
I needed to upload a 3rd party jar to Nexus. This is pretty easy when you use the admin UI, but I couldn't find a way to upload the other artifacts (e.g. the sources and javadoc jars). Here's how to do it...
mvn deploy:deploy-file
The key is to use the Maven Deploy plugin from the command line. It has a special goal, 'deploy-file', that lets you upload 3rd party binary jars.
Publishing JARs to maven
Since the maintainers of GXT don't really do maven (and the central maven repository hasn't been updated to version 1.2.3 yet), I decided to put the latest release of GXT into our own Nexus repository. After downloading the latest release, here's how I pushed it to our repository: First I uploaded the binary jar:
mvn deploy:deploy-file -DgroupId=com.extjs \
    -DartifactId=gxt \
    -Dpackaging=jar \
    -Dversion=1.2.3 \
    -Dfile=gxt.jar \
    -Durl=http://repo/nexus/content/repositories/releases \
    -DrepositoryId=IdInM2settings
Then I uploaded it again as the sources jar:
mvn deploy:deploy-file -DgroupId=com.extjs \
    -DartifactId=gxt \
    -Dpackaging=jar \
    -Dversion=1.2.3 \
    -Dfile=gxt.jar \
    -Durl=http://repo/nexus/content/repositories/releases \
    -DrepositoryId=IdInM2settings \
    -Dclassifier=sources
Note: I upload the sources jar so all of the IDE features work (helpful code sense, debugging, etc).
Upload permissions
You need to make sure that your ~/.m2/settings.xml file has the username and password of a user that is allowed to add resources to your repository:
<settings>
  <servers>
    <server>
      <id>IdInM2settings</id>
      <username>admin</username>
      <password>password</password>
    </server>
  </servers>
  ...
</settings>
Notes
Here's the upload URL that is created from the arguments passed to 'deploy-file':

${url}/${groupId}/${artifactId}/${version}/${artifactId}-${version}-${classifier}.${packaging}

(The -${classifier} part only appears when a classifier is given.)
It replaces . with / in the ${groupId}.
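For example, the sources upload above resolves to:

http://repo/nexus/content/repositories/releases/com/extjs/gxt/1.2.3/gxt-1.2.3-sources.jar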
It will create a pom.xml file for you if one has not already been created. The second upload will update the pom.
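The generated pom is roughly just the coordinates (a sketch, not the exact bytes Maven writes):

<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.extjs</groupId>
  <artifactId>gxt</artifactId>
  <version>1.2.3</version>
  <packaging>jar</packaging>
</project>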
Java Memory Puzzle
The Puzzle
The following code throws an OutOfMemoryError when you run it:
public class JavaMemoryPuzzle {
    private final int dataSize = (int) (Runtime.getRuntime().maxMemory() * 0.6);

    public void f() {
        {
            byte[] data = new byte[dataSize];
        }
        byte[] data2 = new byte[dataSize];
    }

    public static void main(String[] args) {
        JavaMemoryPuzzle jmp = new JavaMemoryPuzzle();
        jmp.f();
    }
}
This code does not throw an OutOfMemoryError:
public class JavaMemoryPuzzlePolite {
    private final int dataSize = (int) (Runtime.getRuntime().maxMemory() * 0.6);

    public void f() {
        {
            byte[] data = new byte[dataSize];
        }
        for (int i = 0; i < 10; i++) {
            System.out.println("Please be so kind and release memory");
        }
        byte[] data2 = new byte[dataSize];
    }

    public static void main(String[] args) {
        JavaMemoryPuzzlePolite jmp = new JavaMemoryPuzzlePolite();
        jmp.f();
        System.out.println("No OutOfMemoryError");
    }
}
The question is why?
Investigation (narrowing the problem)
So the only significant difference between the two classes is the use of a for loop in the code that doesn't OOM. The method f() is where I focused my attention. The method allocates two byte arrays each using 60% of the maximum amount of memory; if garbage collection doesn't kick in for data then an OOM error will happen. It looks like the narrowed puzzle is: why is the byte[] data not garbage collected after it falls out of scope?
My understanding of this (prior to this exercise) was that local variables were stored on the stack and popped off at the end of the method call, so nulling them inside a method was a waste of time. I hadn't considered the more fundamental scoping question though. When does the garbage collector remove references to local variables that are out of scope?
Ok now it was time to do some searching... the problem was that I didn't even know what to search for. After doing a bunch of searches about garbage collection I found this Appendix in a java garbage collection performance guide that seemed to answer the question.
Quote:

"An efficient implementation of the JVM is unlikely to zero the reference when it goes out of scope... Because invisible objects can't be collected... you might have to explicitly null your references to enable garbage collection."

There were still some things bothering me though:
- How does the for loop prevent the OOM? Is it enabling garbage collection?
- The article is old: "Unless otherwise noted, all performance measurements described in this book were run on a pre-release build of the Java 2 Standard Edition (J2SE) v. 1.3 using the HotSpot Client VM on the Microsoft Windows operating system." IMHO, the VM has changed a lot since the pre-release 1.3, especially with regard to performance and garbage collection.
Investigation (Garbage Collection)
What I needed to know was what my more modern VM (1.5.0_16 on OS X) was really doing when both sets of code were executed. Here are the two executions with verbose GC enabled:
$ java -verbosegc JavaMemoryPuzzle
[GC 423K->156K(1984K), 0.0018041 secs]
[Full GC 156K->156K(1984K), 0.0136870 secs]
[GC 39219K->39209K(41040K), 0.0005973 secs]
[Full GC 39209K->39209K(41040K), 0.0085698 secs]
[Full GC 39209K->39201K(65088K), 0.0362276 secs]
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
        at JavaMemoryPuzzle.f(JavaMemoryPuzzle.java:12)
        at JavaMemoryPuzzle.main(JavaMemoryPuzzle.java:17)

$ java -verbosegc JavaMemoryPuzzlePolite
[GC 423K->156K(1984K), 0.0017339 secs]
[Full GC 156K->156K(1984K), 0.0107578 secs]
Please be so kind and release memory
Please be so kind and release memory
Please be so kind and release memory
Please be so kind and release memory
Please be so kind and release memory
Please be so kind and release memory
Please be so kind and release memory
Please be so kind and release memory
Please be so kind and release memory
Please be so kind and release memory
[GC 39227K->39209K(41040K), 0.0006873 secs]
[Full GC 39209K->156K(41040K), 0.0082828 secs]
So this shows us the problem... the last Full GC is able to reclaim memory in the Polite class but not in the non-Polite class. It still doesn't tell us why.
Investigation (Bytecode)
The only thing I could think of was to somehow look at the bytecode that was being generated by the compiler. The tool to do this is javap. I opened a command prompt and ran it against the two classes. There is more stuff that comes out, but I trimmed the result to just the f() method as that is the important method:
$ javap -c JavaMemoryPuzzle
...
public void f();
  Code:
   0:  aload_0
   1:  getfield #6;
   4:  newarray byte
   6:  astore_1
   7:  aload_0
   8:  getfield #6;
   11: newarray byte
   13: astore_1
   14: return
...
It looks like the instructions at offsets 6 and 13 (both astore_1) use the same index in the local variable array of the current frame to hold the arrays created by the newarray instructions at offsets 4 and 11. I'm not sure what this means, but I think I was expecting two different indices.
Then I ran it again for the polite version:
$ javap -c JavaMemoryPuzzlePolite
...
public void f();
  Code:
   0:  aload_0
   1:  getfield #6;
   4:  newarray byte
   6:  astore_1
   7:  iconst_0
   8:  istore_1
   9:  iload_1
   10: bipush 10
   12: if_icmpge 29
   15: getstatic #7;
   18: ldc #8;
   20: invokevirtual #9;
   23: iinc 1, 1
   26: goto 9
   29: aload_0
   30: getfield #6;
   33: newarray byte
   35: astore_1
   36: return
...
Again it looks like the same thing (the astore_1 instructions at offsets 6 and 35), but the difference is that there is a store instruction (istore_1 at offset 8) as part of the initial setup of the for loop. Maybe that store instruction is the key.
Note: I really had no idea what any of these commands meant but I consulted the VM Spec and looked at the instruction set.
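For readers who are equally rusty, here's a rough legend of the instructions that matter in these listings (paraphrased from the JVM spec):

- aload_0: push local variable 0 (the this reference) onto the operand stack
- getfield: read a field (here, dataSize) from the object reference on the stack
- newarray byte: allocate a byte array whose length is the int on top of the stack
- astore_1 / astore_2: pop an object reference into local variable slot 1 / 2
- istore_1 / iload_1: store/load an int into/from local variable slot 1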
Assignment or Looping?
I rewrote the Polite class's f() method as follows to remove the assignment but keep the loop by moving i to an instance variable:
int i = 0;

public void f() {
    {
        byte[] data = new byte[dataSize];
    }
    for (; i < 3; i++) {
        System.out.println("whatever");
    }
    byte[] data = new byte[dataSize];
}
When I run this I get an OOM. The bytecode looks like this:
public void f();
  Code:
   0:  aload_0
   1:  getfield #6;
   4:  newarray byte
   6:  astore_1
   7:  aload_0
   8:  getfield #7;
   11: iconst_3
   12: if_icmpge 36
   15: getstatic #8;
   18: ldc #9;
   20: invokevirtual #10;
   23: aload_0
   24: dup
   25: getfield #7;
   28: iconst_1
   29: iadd
   30: putfield #7;
   33: goto 7
   36: aload_0
   37: getfield #6;
   40: newarray byte
   42: astore_1
   43: return
Notice there is no store instruction between the astore_1 at offset 6 and the astore_1 at offset 42. Then I rewrote it one more time to have just a simple assignment (no loop) in the f() method:
public void f() {
    {
        byte[] data = new byte[dataSize];
    }
    int i = 1;
    byte[] data = new byte[dataSize];
}
When I run this I get no OOM. The bytecode looks like this:
public void f();
  Code:
   0:  aload_0
   1:  getfield #6;
   4:  newarray byte
   6:  astore_1
   7:  iconst_1
   8:  istore_1
   9:  aload_0
   10: getfield #6;
   13: newarray byte
   15: astore_2
   16: return
Notice the store instruction (istore_1) at offset 8, and that the final store at offset 15 (astore_2) uses a different local variable index.
Conclusion
I think the answer to the Java Memory Puzzle possibly depends upon the version of the JVM on which you are running. The OOM error occurs because the garbage collector has problems reclaiming memory allocated for invisible objects. It appears that an assignment made within the same method (and stack frame) after an invisible object is out of scope allows the garbage collector to reclaim the memory allocated for that invisible object. Looking back at the bytecode, a plausible explanation is that the extra store instruction overwrites local variable slot 1, which still holds the stale array reference; without it, that slot keeps the array reachable until the second astore_1, which only happens after the second allocation. Still, I don't fully understand why the garbage collector behaves this way.
Of course, this code is a contrived example; in practice the two assignments in the example would probably happen in different methods and thus not run into this problem (each method gets its own stack frame). You certainly don't need to assign a variable to null if that is the last thing that happens before the method ends.
It is true that in very rare cases you may need to assign a local variable to null in order to let the garbage collector reclaim the memory assigned to that object. I can think of a couple of edge scenarios where this might be the case. A large-data-to-small-data scenario:
public SmallData f() {
    BigData big = dataFetcher.getBigData();
    SmallData small = big.getSmallSubSet();
    big = null;       // big can be collected
    small.doWork();   // creates a bunch of objects
    return small.readyForReturn();
}
or some kind of pool-like thing:
public void f() {
    {
        byte[] data = new byte[dataSize];
        data = null; // can be collected
    }
    while (true) {
        // do work forever
    }
}
Row level security using Spring and Hibernate
Recently I was tasked with adding data security to an existing application. The security rules were complex: users within the same user role were supposed to be able to view and edit data differently. I decided to implement this task with a permissions-based system whereby users were grouped into roles and each role had a certain set of permissions. Here's how I implemented the solution using Spring and Hibernate...
Specifications
A little background on how the client thought about application security: USERs in the system could belong to one or more ROLEs. Each ROLE had a set of data level VIEW permissions and a potentially different set of EDIT permissions (the EDIT permissions were always a subset of the VIEW permissions).
At first, the application was developed using a static set of ROLEs, with each ROLE having a set of data filters defined. The problem with that strategy was that every time a new ROLE was added, a new set of data filters needed to be defined (and coded). The client wouldn't accept this solution because they didn't want to go through an entire code, test, deploy cycle every time they decided to define a new ROLE. Furthermore, the ROLEs were somewhat in flux and we didn't know when they would stabilize.
So we proposed a permissions-based model and tied the application to a fixed set of permissions. Once the application was bound to permissions only, it was decoupled from USER and ROLE assignments. Ultimately, we ended up tying granted permissions to ROLEs for ease of management, even though the application couldn't care less about the ROLE. Now only a change to the permission set would require a full code cycle. This was pretty low risk because the business processes being handled by this application were quite mature; the permission set was already very stable.
Architectural Goals
- Make the DAO class implementations unaware of the filters.
- The filters should be externally configurable.
- The filters should be tied to DAO methods.
Possible Implementation Layers
After deciding upon the permissions based system, I did a little bit of research and didn't find much on the topic. Really there are two common spots used to implement row level security: the database (e.g. Oracle Virtual Private Database) or in the application tier. VPD looks pretty cool but we weren't using Oracle, so no go on that one. We could have still implemented this data security in the database as a stored procedure/function or in the application layer.
We chose the application layer because of the number of possible permutations for the filters. We had a bunch of orthogonal permissions each with their own data condition. Trying to code nasty permutations in straight procedural code is messy and error prone. The best fit for this was something that could dynamically build filters without a boatload of if/else blocks. Stored procedures were out, application layer code was the answer for me. I immediately thought of the Criteria API in Hibernate as a good fit.
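To make that concrete, here's a toy sketch (not from the real codebase; grantedPermissions and toCriterion() are hypothetical names) of why the Criteria API fit so well: each granted permission contributes one predicate, and the overall filter is just the disjunction of whatever was granted.

Criteria criteria = session.createCriteria(Work.class); // session: an open Hibernate Session
Disjunction filter = Restrictions.disjunction();
for (Permission p : grantedPermissions) {   // grantedPermissions: hypothetical collection
    filter.add(p.toCriterion());            // each permission maps to one predicate
}
criteria.add(filter); // one dynamic filter, no if/else permutation explosion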
Application Implementation
Now that I had decided to do this at the code layer, I needed to figure out exactly how I was going to implement this solution. A couple of things came to mind:
- an aspect style, wrapper implementation
- a brute force style, inject code into the DAO methods style
The wrapper implementation seemed to be on par with my architectural goals, so I decided to research that option a little further (there really wasn't much to research on the brute force style). After reading Rick Hightower's article, a blueprint for row level security using Hibernate filters, I had a decent blueprint with which to work.
Implementation
Okay, first things first: I needed a way to intercept certain method calls in order to generate the dynamic filters (based upon a User's granted permissions). What I wanted to emulate was how Spring transactions were configured in our manager layer:
<bean id="txProxyTemplate" abstract="true" class="org.springframework.transaction.interceptor.TransactionProxyFactoryBean"> <property name="transactionManager" ref="transactionManager"/> <property name="transactionAttributes"> <props> <prop key="save*">PROPAGATION_REQUIRED</prop> <prop key="remove*">PROPAGATION_REQUIRED</prop> <prop key="*">readOnly</prop> </props> </property> </bean> <bean id="doSomeWorkManager" parent="txProxyTemplate"> <property name="target"> <bean class="com.mattfleming.service.impl.WorkeManagerImpl" autowire="byName"/> </property> </bean>
In the configuration above, any method starting with save or remove would require a transaction; everything else would require no transaction. Out of this was born the RowLevelFilter and PermissionPrefixRowLevelFilterMethodInterceptor classes. Here are the full implementations of those classes:
RowLevelFilter
package com.mattfleming.security;

import java.util.List;

/**
 * Something which filters data at a row level.
 */
public interface RowLevelFilter {

    /**
     * Do everything necessary to apply the filter here. If you need other things
     * to accomplish this task, make sure to set up the Spring config files so that
     * your implementation has access to these other resources.
     *
     * @param keys a List of keys to be passed to the filter
     * @param arguments arguments passed to the filtered method
     */
    void prepare(List<String> keys, Object[] arguments);
}
PermissionPrefixRowLevelFilterMethodInterceptor
package com.mattfleming.security;

import org.aopalliance.intercept.MethodInterceptor;
import org.aopalliance.intercept.MethodInvocation;
import org.springframework.util.PatternMatchUtils;
import org.springframework.util.StringUtils;

import java.lang.reflect.Method;
import java.util.*;

/**
 * This class allows you to wrap a bean so that a RowLevelFilter will be invoked if
 * the method name matches the bean configuration file. The easiest way to use
 * this bean is by configuring it as an abstract parent. For example,
 *
 * <bean id="rowLevelSecurityProxyTemplate" abstract="true"
 *       class="com.mattfleming.security.RowLevelFilterProxyFactoryBean">
 *   <property name="methodToPermissionPrefix">
 *     <props>
 *       <prop key="save*">EDIT</prop>
 *       <prop key="remove*">EDIT</prop>
 *       <prop key="*">VIEW</prop>
 *     </props>
 *   </property>
 * </bean>
 * <bean id="workDao" parent="rowLevelSecurityProxyTemplate">
 *   <property name="target">
 *     <bean class="com.mattfleming.dao.hibernate.WorkDaoHibernate" autowire="byName"/>
 *   </property>
 *   <property name="filter">
 *     <bean class="com.mattfleming.dao.hibernate.WorkSecurityFilter" autowire="byName"/>
 *   </property>
 * </bean>
 *
 * In the example above, all methods named save* and remove* will invoke the
 * prepare(List<String> keys) method on the specified filter (WorkSecurityFilter)
 * and pass the keys specified in the prop definition. If you want multiple keys
 * to be passed, they should be comma separated and the keys cannot contain spaces.
 * The method name patterns are enforced via the PatternMatchUtils class which
 * currently supports the following simple pattern styles: "xxx*", "*xxx" and
 * "*xxx*" matches, as well as direct equality.
 *
 * @see org.springframework.util.PatternMatchUtils#simpleMatch(String, String)
 */
public class PermissionPrefixRowLevelFilterMethodInterceptor implements MethodInterceptor {

    private Map<String, List<String>> methodToPermissionPrefixMap = new HashMap<String, List<String>>();
    private RowLevelFilter filter;

    public Object invoke(MethodInvocation methodInvocation) throws Throwable {
        // Check to see if the method should be intercepted.
        List<String> permissionPrefixes = getPermissionPrefixes(methodInvocation.getMethod());
        if (permissionPrefixes != null) {
            // Apply row level filters.
            filter.prepare(permissionPrefixes, methodInvocation.getArguments());
        }
        return methodInvocation.proceed();
    }

    public void setMethodToPermissionPrefix(Properties methodToPermissionPrefix) {
        for (Object o : methodToPermissionPrefix.keySet()) {
            String methodName = (String) o;
            String value = methodToPermissionPrefix.getProperty(methodName);
            List<String> prefixes = methodToPermissionPrefixMap.get(methodName);
            if (prefixes == null) {
                prefixes = new ArrayList<String>();
            }
            String[] tokens = StringUtils.commaDelimitedListToStringArray(value);
            for (String token : tokens) {
                // Trim leading and trailing whitespace.
                token = StringUtils.trimWhitespace(token.trim());
                prefixes.add(token);
            }
            methodToPermissionPrefixMap.put(methodName, prefixes);
        }
    }

    private List<String> getPermissionPrefixes(Method method) {
        // Look for a direct name match.
        String methodName = method.getName();
        List<String> prefixes = this.methodToPermissionPrefixMap.get(methodName);
        if (prefixes == null) {
            // Look for the most specific name match.
            String bestNameMatch = null;
            Set<String> keys = this.methodToPermissionPrefixMap.keySet();
            for (String mappedName : keys) {
                boolean matches = PatternMatchUtils.simpleMatch(mappedName, methodName);
                if (matches && (bestNameMatch == null || bestNameMatch.length() <= mappedName.length())) {
                    prefixes = this.methodToPermissionPrefixMap.get(mappedName);
                    bestNameMatch = mappedName;
                }
            }
        }
        return prefixes;
    }

    public void setFilter(RowLevelFilter filter) {
        this.filter = filter;
    }
}
So every method call to a bean using the rowLevelSecurityProxyTemplate as its parent would be evaluated. If a method name matched the configuration, the appropriate filter would be invoked and (in our case) generate a dynamic Hibernate Criterion, but you could really do whatever you wanted. To achieve full invisibility to the DAO layer, I would have enabled Hibernate session filters here.
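For comparison, here's roughly what enabling a Hibernate session filter looks like (a sketch with a hypothetical filter named workVisibility; this is the approach I ultimately couldn't use, as explained below):

// Hypothetical: a filter named "workVisibility" would be defined statically in the
// Hibernate mappings; enabling it makes every query on this session apply it.
Session session = sessionFactory.getCurrentSession();
session.enableFilter("workVisibility")
       .setParameter("username", currentUsername);
// From here on, the DAOs never know the filter exists.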
RowLevelFilter Implementation
I really wanted to make use of Hibernate session filters to add row level security. If I were able to use Hibernate filters, any query using that session would have the filters applied (magically). What's great about that is that the DAO classes don't have to know about the filters at all.
But I was unable to do this because of the dynamic nature of the rule set. Hibernate filters are static predicates that take parameters. When more than one filter is defined, the conjunction of the filters is executed (via an SQL AND). In my case, the data filters were really a bunch of disjunctions (OR clauses) with some conjunctions as well. Every disjunction in my rule set became another permutation of a static filter; the number of static filters gets pretty large with only a few disjunctions. What's worse is that even if I defined all of the filters, I would still have to write code to determine which filter to enable on the session. In this case, Hibernate filters were out.
The goal of invisibility was not going to be met this time. I could still try to minimize the dependencies between the Interceptor and the DAO implementations though. Things to look for in the RowLevelFilter implementation below are how the filter gets passed down to my DAO layer and how the filter key gets turned into an application permission. The class below is not the full implementation, but it should be enough for you to get the idea.
package com.mattfleming.dao.hibernate;

// Imports and helper methods (log, parsePermissions, createCreatedByMeCriterion,
// createCreatedByOthersCriterion) are omitted for brevity.
public class WorkSecurityFilter implements RowLevelFilter {

    private Map<UsageIntentEnum, Criterion> passedFilters;

    /**
     * Enforce data access permissions for viewing and modification.
     *
     * @param keys to indicate the user's intent
     * @param arguments that are passed to the method being invoked
     */
    public void prepare(List<String> keys, Object[] arguments) {
        for (String key : keys) {
            UsageIntentEnum action = UsageIntentEnum.valueOf(key);
            if (action != null) {
                passedFilters.put(action, createCriteria(action));
            }
        }
    }

    /**
     * Create the data filter necessary to view. Here is the clause's pseudocode:
     * WHERE (
     *   (createdByMe AND statusAndInvolvementPermissions) OR
     *   (statusAndInvolvementPermissions AND (whichResidents OR whichAgencies))
     * )
     *
     * @param action which permission set to use
     * @return Criterion if a filter exists or null
     */
    private Criterion createCriteria(UsageIntentEnum action) {
        String username = ((UserDetails) SecurityContextHolder.getContext()
                .getAuthentication().getPrincipal()).getUsername();
        Disjunction ret = Restrictions.disjunction();
        if (!parsePermissions(MessageFormat.format(
                "PERM_WORK.{0}.CREATED-BY-ANYONE.ALL-STATUS.ALL-", action)).isEmpty()) {
            return ret;
        }
        Criterion createdByMe = createCreatedByMeCriterion(username, action);
        Criterion createdByOthers = createCreatedByOthersCriterion(username, action);
        if (createdByMe == null && createdByOthers == null) {
            ret = null;
        } else {
            if (createdByMe != null) {
                ret.add(createdByMe);
            }
            if (createdByOthers != null) {
                ret.add(createdByOthers);
            }
        }
        if (log.isDebugEnabled()) {
            log.debug("Row Level Data Filter created for " + username + " is: " + ret);
        }
        return ret;
    }

    public enum UsageIntentEnum { VIEW, EDIT }
}
The UsageIntentEnum has the same values that are in our bean definition for the rowLevelSecurityProxyTemplate's methodToPermissionPrefix property. These same values are embedded into our application permissions and are also used as the keys to the passedFilters Map. This Map is the vehicle by which the Criterion objects created in the filter are passed to the DAO classes.
DAO Implementation
In the DAO implementations you make use of the passedFilters like this:
public class WorkDaoHibernate extends BaseDaoHibernate implements WorkDao {

    private Map<UsageIntentEnum, Criterion> passedFilters;

    public void setPassedFilters(Map<UsageIntentEnum, Criterion> passedFilters) {
        this.passedFilters = passedFilters;
    }

    private Criteria getCriteria(UsageIntentEnum intent) {
        Criterion restrictions = passedFilters.get(intent);
        if (restrictions != null) {
            Criteria filter = getSession().createCriteria(Work.class);
            filter.add(restrictions);
            return filter;
        } else {
            throw new AccessDeniedException("Security filters are not properly enabled.");
        }
    }
}
The passedFilters Map is the same Map that was created in the RowLevelFilter implementation. Spring is the glue that ties this all together though; it is in the Spring configuration file where the filter is tied to both the RowLevelFilter and the DAO implementation.
Spring Configuration
Here is where we tie it all together. The XML file below:
- Defines the interceptor (MethodInterceptor implementation).
- Defines the security filter (RowLevelFilter implementation).
- Defines the DAO implementation and hooks it to the interceptor.
- Defines the means to pass the filters to the DAO implementation (passedFilters).
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:aop="http://www.springframework.org/schema/aop"
       xsi:schemaLocation="
           http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
           http://www.springframework.org/schema/aop
           http://www.springframework.org/schema/aop/spring-aop-2.0.xsd">

  <bean id="rowLevelSecurityProxyTemplate" abstract="true"
        class="com.mattfleming.security.RowLevelFilterProxyFactoryBean">
    <property name="methodToPermissionPrefix">
      <props>
        <prop key="save*">EDIT</prop>
        <prop key="remove*">EDIT</prop>
        <prop key="*">VIEW, EDIT</prop>
      </props>
    </property>
  </bean>

  <bean id="workSecurityFilter" class="com.mattfleming.dao.hibernate.WorkSecurityFilter" autowire="byName"/>

  <bean id="workDao" parent="rowLevelSecurityProxyTemplate">
    <property name="target">
      <bean class="com.mattfleming.dao.hibernate.WorkDaoHibernate" autowire="byName"/>
    </property>
    <property name="filter" ref="workSecurityFilter"/>
  </bean>

  <bean id="passedFilters" class="java.util.HashMap" scope="request">
    <aop:scoped-proxy/>
  </bean>

</beans>
Since the filters are specific to the user making the request, we wouldn't want to have multiple requests sharing the same filters. So how can we guarantee that the passedFilters are unique for each and every request? That's where Spring helps us out... Notice that passedFilters bean is defined as a request scoped bean. This means that on each request a new Map will be created. In order to use the aop tag (necessary for the request scope), you will need to use Spring's xsd in the beans declaration instead of the dtd.
Conclusion
I'm pretty happy with the way the row level filter generation turned out. The only thing that would be better is if we didn't have to pass the filters to the DAO layer, so that no layer had any knowledge of them at all. The original goal was to invisibly (from the DAO's point of view) add row level security, and I didn't quite get all the way there. If Hibernate supported dynamic filters, I could achieve that goal easily. There are other ways to do this, but I like the localization of this solution: the filters are defined in one spot, configured easily, and the DAO classes are the only ones dealing with them.
Dynamic list binding in Spring MVC... what? why?
Previously, I wrote an article on how to achieve dynamic list binding in Spring MVC. Since I wrote that article, I have received emails and comments that ask (in a roundabout way), "Why would I need to do this?" When I wrote the other article I didn't explain the why; I just wrote the how. Here's why you need dynamic binding and the problems it is trying to solve.
Example
Let's say that you have an application that allows you to modify a Hand. As in the thing at the end of your arm. You have a two class object model: Hand and Finger. A Hand has Fingers. More specifically, a Hand has a List of Fingers. For the impatient (who don't want to read the Why section), I created an online application to demonstrate some of the issues.
Why
Here are some reasons why you need dynamic binding in general:
- Creating the object graph takes a really long time
- The object graph could have changed between subsequent page calls
- The page allows for list changes via javascript
Creating the object graph takes a really long time
Let's say that creating a Hand from its persistent storage takes a really long time. You will want to minimize the number of times you go to that persistent storage. Spring invokes the formBackingObject() method in controllers to create object graphs that will be used with web forms.
In our example, when the Edit Hand page is being loaded, the formBackingObject() method is invoked and the page (with the form) is displayed. When that page is submitted, Spring will invoke the formBackingObject() method again and then overlay the results of the form onto the object graph returned from that method invocation. So what just happened? You just invoked an expensive method twice: once for display, once for submission. The second invocation was completely unnecessary. In order to stop the second invocation, you will have something like this in your formBackingObject() implementation:
if (!isFormSubmission(request)) {
//create expensive object graph
} else {
//create non-populated object graph shell
}
So what's the problem? Nothing with regard to the initial page load (provided it isn't from a form submit), but the submission will fail with an Exception. In our example, the List of Fingers on the Hand will be empty when the submission takes place. Spring will then try to get the first Finger out of the List (in order to overlay the values) but will fail because the List is empty.
The object graph could have changed between subsequent page calls
Now let's say that it isn't all that expensive to create a Hand from persistent storage. The double call to formBackingObject() isn't really that big of a deal from a cost (cpu, elapsed time, etc) perspective. Let's say you are on the Edit Hand page typing in your edits. Another user goes to the Edit Hand page, removes a Finger and submits the changes. Now you finish your edits and hit submit. That submit will fail. The reason is the List of Fingers. In the first call, you get a Hand with a List containing five Fingers and those five Fingers are rendered on the page. In the second call, you get a Hand with four Fingers that Spring attempts to overlay the four finger graph with the elements on the form (which has five fingers). The fifth finger will error out.
The page allows for list changes via javascript
On the first call, a Hand with five Fingers is returned and displayed. However, this is one of those fancy web 2.0 kind of pages... one where you can click an Add New Finger button and magically the new Finger appears without submitting the form. Once you finish adding Fingers you eventually click the submit button.. only to fail miserably. Much like the previous two reasons, the mismatch between the number of Fingers on the form and the number of Fingers in persistent storage is the problem. When Spring attempts to overlay the new set of Fingers from the form onto the graph created by formBackingObject() there is a mismatch.. the new form has let's say 12 Fingers.. while the graph from storage still only has five Fingers. When the overlaying of the sixth finger is attempted an Exception is thrown.
Solution
In addition to an explanation of the how, I created an online web application to illustrate the problem and the solution. The source war is also attached to this article below.
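For completeness, here's a minimal sketch of the most common fix, assuming Spring's org.springframework.util.AutoPopulatingList (available since Spring 2.0; the generic form shown here requires a later version) and the Hand/Finger classes from the example:

import java.util.List;
import org.springframework.util.AutoPopulatingList;

public class Hand {
    // A list that grows on demand: when binding asks for fingers[11], the list
    // auto-creates Fingers 5..11 instead of failing during Spring's overlay step.
    private List<Finger> fingers = new AutoPopulatingList<Finger>(Finger.class);

    public List<Finger> getFingers() { return fingers; }
    public void setFingers(List<Finger> fingers) { this.fingers = fingers; }
}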
Attachment: DynamicFormsExample.war (http://mattfleming.com/files/active/0/DynamicFormsExample.war), 3.09 MB

Java Update for Mac OS X changes the default keystore password
Apple decided to change the well-known password of the default Java truststore in their latest updates. I'll file this one under the "let's change the thing and see who complains" category.
If you install either:
- Java for Mac OS X 10.6 Update 1
- Java for Mac OS X 10.5 Update 6
The password for the cacerts file was changed to changeme from the usual Sun password of changeit. The system cacerts file is located at /Library/Java/Home/lib/security/cacerts.
I think they're going to change it back to the original, but in the meantime you have two options as a recourse:
- Switch all programs that need to access the default truststore to use changeme.
- Change the truststore password: sudo keytool -storepasswd -new changeit -keystore /Library/Java/Home/lib/security/cacerts -storepass changeme
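Either way, you can check which password is currently in effect by listing the store; keytool only lists the entries when the password is correct:

keytool -list -keystore /Library/Java/Home/lib/security/cacerts -storepass changeit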