| Managed property reloading |
| |
| Problem: managed beans currently reload only at their own level. |
| They should instead reload along the entire dependency graph wherever needed. |
| |
| We face two problems: first, we can have deep dependency graphs;
| secondly, multiple beans can reference the same bean and can also
| have long running scopes.
| |
| We cannot solve all problems related to dependency graph reloading for now,
| but we can solve most of them.
| |
| First, the deep graphs.
| |
| The property copying has to be extended: first we have to walk the entire dependency
| graph, and whenever we encounter a managed property we have to reload the referenced
| instance (simply fetch the bean freshly instead of
| copying the old one over).
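| As a sketch of this extended copy step (all names here are hypothetical, not the real ext-scripting API): walk the old property set and, for properties that are themselves managed beans, fetch a fresh instance instead of copying the stale one:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Sketch: copy plain properties over, but refetch managed-bean properties
// from the container so the reloaded class versions are used.
// BeanFactory and all names are hypothetical stand-ins.
public class PropertyReloader {

    /** Hypothetical lookup: property name -> freshly created managed bean. */
    interface BeanFactory {
        Object fetchFresh(String beanName);
    }

    public static Map<String, Object> reloadProperties(
            Map<String, Object> oldProps,
            Set<String> managedBeanNames,
            BeanFactory factory) {
        Map<String, Object> fresh = new HashMap<>();
        for (Map.Entry<String, Object> e : oldProps.entrySet()) {
            if (managedBeanNames.contains(e.getKey())) {
                // managed property: do not copy the stale instance, fetch a new one
                fresh.put(e.getKey(), factory.fetchFresh(e.getKey()));
            } else {
                // plain property: keep the old value
                fresh.put(e.getKey(), e.getValue());
            }
        }
        return fresh;
    }
}
```

| For deep graphs the same step would be applied recursively while descending into each refetched bean.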
| |
| Long running beans: this is not really possible to resolve for now, because
| we do not have back references to the old beans. Programmers
| must either use a manual reload for long running beans referencing others, or simply
| live with it.
| |
| |
| In the long run we probably can introduce a scoped proxy doing this, but then
| we have to work over interfaces again, which is counterproductive.
| |
| Solution for now: we probably have to allow a scoped proxy system within JSF;
| not sure how to solve this fully yet.
| |
| |
| We have to deal with the problem in two stages:
| |
| first, build a forward referencing reloading mechanism for the current state;
| secondly, deal with parents as soon as we can determine them, by
| simply touching the corresponding parent source files, or by building a full graph and then
| marking even the precompiled parents as cascadable.
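| The full-graph variant of the second stage can be sketched with a reverse dependency map (all names hypothetical): when a class changes, the dirty flag cascades to every class referencing it, so the parents get recompiled too:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of cascading dirty-marking over a reverse dependency map
// (child class -> classes referencing it). Names are hypothetical.
public class CascadeMarker {
    private final Map<String, Set<String>> parents = new HashMap<>();
    private final Set<String> dirty = new HashSet<>();

    /** Records that `parent` references `child`. */
    public void registerDependency(String parent, String child) {
        parents.computeIfAbsent(child, k -> new HashSet<>()).add(parent);
    }

    /** Marks the class and, transitively, all referencing classes dirty. */
    public void markDirty(String className) {
        if (!dirty.add(className)) {
            return; // already marked; also guards against cycles
        }
        for (String p : parents.getOrDefault(className, Set.of())) {
            markDirty(p);
        }
    }

    public Set<String> dirtyClasses() {
        return dirty;
    }
}
```

| The file-touching alternative would achieve the same cascade by bumping the parents' source timestamps so the normal change detection picks them up.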
| |
| |
| |
| Annotation scanning and recompilation in a language independent manner
| |
| Currently:
| We currently do source scanning for annotations; this can only work in Java,
| and even there only in a limited manner.
| |
| Goal:
| Annotations should be processed on the fly, with new ones automatically picked up
| and altered ones reprocessed.
| |
| Existing facilities: the class tracker, which keeps track of already loaded
| classes,
| and the MyFaces configuration object, which keeps the data for the beans etc.
| |
| The following path should be provided: if an object loading or reinitialisation
| runs into a class-not-found or resource-not-declared situation,
| track down the sources which are not processed yet, compile them,
| then process the annotations and try to reload the class for further
| processing.
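| The recovery path above can be sketched as follows; the compiler, scanner and loader wiring here are hypothetical stand-ins for the real weaver facilities:

```java
// Sketch: when loading a class fails, compile the sources the tracker has
// not processed yet, rescan their annotations, then retry the load.
// Compiler and AnnotationScanner are hypothetical interfaces.
public class LazyCompileLoader {

    interface Compiler { void compilePending(); }
    interface AnnotationScanner { void rescan(); }

    private final Compiler compiler;
    private final AnnotationScanner scanner;
    private final ClassLoader loader;

    public LazyCompileLoader(Compiler c, AnnotationScanner s, ClassLoader l) {
        this.compiler = c;
        this.scanner = s;
        this.loader = l;
    }

    public Class<?> loadOrCompile(String className) throws ClassNotFoundException {
        try {
            return loader.loadClass(className);
        } catch (ClassNotFoundException first) {
            compiler.compilePending();          // compile sources not processed yet
            scanner.rescan();                   // process annotations of the new classes
            return loader.loadClass(className); // retry; rethrows if still missing
        }
    }
}
```

| If the second load attempt also fails, the exception propagates and the old class-not-found handling takes over, as described below.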
| |
| |
| The following interception points have to be triggered:
| a) the scripting weaver has to trigger the tracking code if a class cannot
| be found;
| b) upon initial loading of beans etc., if the proxies hand back a null value,
| we have to trigger the scanner and recompiler on this level as well.
| |
| Extension to the tracker to deal with removed or renamed files:
| the tracker, while running, has to check whether each file still exists, and if not,
| has to remove the file from its list.
| |
| In this case we also have to remove the corresponding facilities from the configuration!
| PENDING removal: |
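| A minimal sketch of such a tracker extension, with hypothetical names (the real tracker keys and configuration hooks may differ): the prune step returns the class names whose facilities must then be removed from the configuration:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

// Sketch: on each run, drop tracker entries whose source file no longer
// exists and report them so the configuration can unregister the beans.
// All names are hypothetical.
public class SourceTracker {
    private final Map<String, File> trackedFiles = new HashMap<>();

    public void track(String className, File source) {
        trackedFiles.put(className, source);
    }

    /** Removes vanished files; returns the class names to unregister. */
    public List<String> pruneDeleted() {
        List<String> removed = new ArrayList<>();
        Iterator<Map.Entry<String, File>> it = trackedFiles.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, File> e = it.next();
            if (!e.getValue().exists()) {
                removed.add(e.getKey());
                it.remove();
            }
        }
        return removed;
    }
}
```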
| |
| |
| The following has to be done for now:
| a) provide annotation scanning upon class loading: every time a class
| is loaded via our facilities we have to rescan its annotations;
| this keeps track of changes in the managed properties
| |
| b) if a class cannot be found at loader level, we have to compile all source files of the requested type
| currently not found in the tracker, then try to load the class again and reparse it;
| if the class still cannot be found, we can fall back into the old class-not-found code
| (probably automatable via javac due to its wildcarding nature)
| |
| If a facility under a certain id cannot be found, we have to recompile all facilities
| and then do a full reregistration scan of all annotations (note: we also should keep
| track of the annotation states so that we can unregister deleted ones)
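| The annotation state bookkeeping mentioned in the note could look like this sketch (names hypothetical): keep the previous scan result and diff it against the new one to find registrations that must be removed:

```java
import java.util.HashSet;
import java.util.Set;

// Sketch: remember the names registered by the previous annotation scan so
// that beans whose annotation disappeared can be unregistered on the next
// full scan. All names are hypothetical.
public class AnnotationStateTracker {
    private Set<String> previouslyRegistered = new HashSet<>();

    /** Compares the new scan against the old one; returns names to unregister. */
    public Set<String> diffAndUpdate(Set<String> currentScan) {
        Set<String> toUnregister = new HashSet<>(previouslyRegistered);
        toUnregister.removeAll(currentScan);
        previouslyRegistered = new HashSet<>(currentScan);
        return toUnregister;
    }
}
```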
| |
| |
| ---------------- |
| |
| Idea for a better compile cycle...
| Problem currently: we do a class-by-class compile.
| This strategy has proven problematic: too many side dependencies are not
| found, resulting in way too many ClassCastExceptions.
| |
| Better option: since we cannot exchange the classes fully without occasionally getting exceptions
| on existing objects, we try to minimize the problem in the following ways:
| |
| beans: reload the classes which need to be reloaded; full bean tree reload, probably with an annotation
| for beans which should be kept
| |
| others: full reload
| |
| Full recompilation if we have dirty classes at the beginning of the lifecycle, so that
| we get fresh compilation results by the time we access the page; no more on-demand reload.
| |
| We should keep a dirty marker centrally for artefacts which have changed. |
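| Such a central dirty marker could be as simple as this sketch (hypothetical names); the lifecycle entry point checks it and triggers the full recompile before the page is rendered:

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Sketch of a central dirty registry for changed artefacts, consulted once
// at the beginning of the lifecycle. All names are hypothetical.
public class DirtyRegistry {
    private final Set<String> dirtyArtefacts =
            Collections.synchronizedSet(new HashSet<>());

    /** Called by the file watcher / tracker when an artefact changes. */
    public void markDirty(String artefact) {
        dirtyArtefacts.add(artefact);
    }

    /** Checked at lifecycle start; any dirty artefact forces a full recompile. */
    public boolean needsFullRecompile() {
        return !dirtyArtefacts.isEmpty();
    }

    /** Called after a successful full recompile. */
    public void clear() {
        dirtyArtefacts.clear();
    }
}
```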
| |
| An additional advantage of the full recompile: we can deal with annotation scans on
| binary level instead of source level, and hence can support annotations
| on the fly.
| |
| We still can get ClassCastExceptions, because we cannot reload all newly compiled artefacts,
| but we have fewer problems with them than in our single file reloading strategy.
| |
| -------------- |
| |
| clearing of the managed bean map: |
| |
|     public Object getValue(final ELContext context, final Object base, final Object property)
|             throws NullPointerException, PropertyNotFoundException, ELException
|     {
|         // only resolve top-level names
|         if (base != null)
|             return null;
|
|         if (property == null)
|         {
|             throw new PropertyNotFoundException();
|         }
|
|         final ExternalContext extContext = externalContext(context);
|
|         if (extContext == null)
|             return null;
|         // an instance already exists in one of the scopes, nothing to create here
|         if (extContext.getRequestMap().containsKey(property))
|             return null;
|         if (extContext.getSessionMap().containsKey(property))
|             return null;
|         if (extContext.getApplicationMap().containsKey(property))
|             return null;
|
|         if (!(property instanceof String))
|             return null;
|
|         final String strProperty = (String) property;
|
|         // look up the bean definition and create a fresh instance on demand
|         final ManagedBean managedBean = runtimeConfig(context).getManagedBean(strProperty);
|         Object beanInstance = null;
|         if (managedBean != null)
|         {
|             FacesContext facesContext = facesContext(context);
|             context.setPropertyResolved(true);
|             beanInstance = createManagedBean(managedBean, facesContext);
|         }
|
|         return beanInstance;
|     }
| |
| Once we have identified which managed beans are invalidated, we have to clear
| the corresponding entries from the request map, the session map and the application
| map, so that all of them are refreshed on the next request.
| |
| We also drop all properties there and do not keep anything.
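| The clearing step can be sketched like this; plain maps stand in for the ExternalContext scope maps, and all names are hypothetical:

```java
import java.util.Collection;
import java.util.Map;

// Sketch: evict invalidated managed beans from all three scope maps so the
// resolver above recreates them on the next access. Names are hypothetical.
public class ScopeCleaner {
    public static void evict(Collection<String> invalidatedBeans,
                             Map<String, Object> requestMap,
                             Map<String, Object> sessionMap,
                             Map<String, Object> applicationMap) {
        for (String name : invalidatedBeans) {
            requestMap.remove(name);
            sessionMap.remove(name);
            applicationMap.remove(name);
        }
    }
}
```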
| |
| Ok, let's sum everything up:
| a) Strategy 1: we reload only the affected object and keep the properties intact
| positive: simple
| negative: too limiting
| |
| b) fine grained control of the reloading: we cascade over the entire object tree and invalidate the objects
| touched by the dependency resolution, while trying to keep the properties intact
| positive: probably the best control over everything which has to be reloaded
| negative: very complicated and slowest
| |
| c) we drop all managed beans we have in the system; only the ones we mark as not droppable are kept
| (login handlers for testing etc.)
| positive: very simple to implement, by simply nulling all managed bean entries in the session, application and request maps
| and then reassigning the new managed beans to the objects kept
| negative: way too much data loss in this case; we cannot keep the properties
| |