Search Results

Search found 531 results on 22 pages for 'datastore'.

Page 21 of 22

  • Setting the type of a field in a superclass from a subclass (Java)

    - by Ibolit
    Hi. I am writing a project on Google App Engine. Within it I have a number of abstract classes that I hope to reuse in future projects, and a number of concrete classes inheriting from them. Among the abstract classes I have an abstract servlet that does user management, and I have an abstract user. The AbstractUser has all the fields and methods needed for storing it in the datastore and for telling whether the user is registered with my service or not. It does not implement any project-specific functionality. The abstract servlet that manages users refers only to the methods declared in the AbstractUser class, which allows it to generate links for logging in, logging out and registering (for unregistered users). In order to implement the project-specific user functionality I need to subclass AbstractUser. The servlets I use in my project are all indirect descendants of that abstract user-management servlet, and the user is a protected field in it, so the descendant servlets can use it as their own field. However, whenever I want to access any project-specific method of the concrete user, I need to cast it to that type, i.e.:

    (abstract user-managing servlet)
    ...
    AbstractUser user = getUser();
    ...
    abstract protected AbstractUser getUser();

    (project-specific abstract servlet)
    @Override
    protected AbstractUser getUser() { return MyUserFactory.getUser(); }

    (any other project-specific servlet)
    int a = ((ConcreteUser) user).getA();

    What I'd like to do is somehow make the type of "user" in the superclass depend on something in the project-specific abstract class. Is it at all possible? And I don't want to move all the user-management stuff into a project-specific layer, for I would like to have it already written for my future projects :) Thank you for your help.
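    One way this is often handled is with Java generics: parameterize the servlet base class on the user type so descendants never need a cast. A minimal sketch, assuming a type parameter is acceptable; class names like AbstractUserServlet and ConcreteUser are illustrative, not from the post:

    import javax.servlet.http.HttpServlet;

    // Sketch: the user-management servlet is generic in the concrete user type,
    // mirroring the post's protected-field design but without downcasts.
    abstract class AbstractUser {
        abstract boolean isRegistered();
    }

    abstract class AbstractUserServlet<U extends AbstractUser> extends HttpServlet {
        protected U user;                 // typed as the project's concrete user

        protected abstract U getUser();   // each project returns its own type

        @Override
        public void init() {
            user = getUser();             // no cast needed anywhere below this class
        }
    }

    class ConcreteUser extends AbstractUser {
        boolean isRegistered() { return true; }
        int getA() { return 42; }         // project-specific method
    }

    class ProjectServlet extends AbstractUserServlet<ConcreteUser> {
        @Override
        protected ConcreteUser getUser() { return new ConcreteUser(); } // e.g. via MyUserFactory

        void useIt() {
            int a = user.getA();          // direct access, no cast
        }
    }

    The generics only pin the static type; the same idea works unchanged if getUser() loads the user from the datastore.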

    Read the article

  • Java: How does ArrayList manage memory?

    - by cka3o4nik
    In my Data Structures class we have studied the Java ArrayList class and how it grows the underlying array when a user adds more elements. That much is understood. However, I cannot figure out how exactly this class frees up memory when lots of elements are removed from the list. Looking at the source, there are three methods that remove elements:

    [code]
    public E remove(int index) {
        RangeCheck(index);
        modCount++;
        E oldValue = (E) elementData[index];
        int numMoved = size - index - 1;
        if (numMoved > 0)
            System.arraycopy(elementData, index+1, elementData, index, numMoved);
        elementData[--size] = null; // Let gc do its work
        return oldValue;
    }

    public boolean remove(Object o) {
        if (o == null) {
            for (int index = 0; index < size; index++)
                if (elementData[index] == null) {
                    fastRemove(index);
                    return true;
                }
        } else {
            for (int index = 0; index < size; index++)
                if (o.equals(elementData[index])) {
                    fastRemove(index);
                    return true;
                }
        }
        return false;
    }

    private void fastRemove(int index) {
        modCount++;
        int numMoved = size - index - 1;
        if (numMoved > 0)
            System.arraycopy(elementData, index+1, elementData, index, numMoved);
        elementData[--size] = null; // Let gc do its work
    }
    [/code]

    None of them shrinks the backing array. I even started questioning whether memory is ever freed at all, but empirical tests show that it is. So there must be some other way it is done, but where and how? I checked the parent classes as well, with no success.
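    Worth noting: ArrayList never shrinks its backing array on removal. What the tests most likely observed is the removed element objects becoming eligible for collection once their slots are nulled (the "Let gc do its work" lines above). The one public method that does shrink the array is trimToSize(); a minimal sketch:

    import java.util.ArrayList;

    public class TrimDemo {
        public static void main(String[] args) {
            ArrayList<Integer> list = new ArrayList<Integer>();
            for (int i = 0; i < 1000000; i++) list.add(i);

            // Remove all but the first 10 elements. The backing array keeps
            // its ~1M capacity; only the element objects become collectible.
            list.subList(10, list.size()).clear();

            // Reallocate the backing array to the current size, letting the
            // old large array be garbage collected.
            list.trimToSize();
        }
    }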

    Read the article

  • What happens when you stop the VS debugger?

    - by mare
    If I have a line like this:

    ContentRepository.Update(existing);

    that goes into the datastore repository to update some object, and I have a try..catch block in this Update function like this:

    string file = XmlProvider.DataStorePhysicalPath + o.GetType().Name +
                  Path.DirectorySeparatorChar + o.Slug + ".xml";
    DataContractSerializer dcs = new DataContractSerializer(typeof(BaseContentObject));
    using (XmlDictionaryWriter myWriter = XmlDictionaryWriter.CreateTextWriter(
               new FileStream(file, FileMode.Truncate, FileAccess.Write), Encoding.UTF8))
    {
        try
        {
            dcs.WriteObject(myWriter, o);
            myWriter.Close();
        }
        catch (Exception)
        {
            // if anything goes wrong, delete the created file
            if (File.Exists(file))
                File.Delete(file);
            if (myWriter.WriteState != WriteState.Closed)
                myWriter.Close();
        }
    }

    then why would Visual Studio go on to call Update() if I click "Stop" in a debugging session on the line above? For instance, I came to that line by stepping line by line with F10, and now I'm on that line, which is colored yellow, and I press Stop. Apparently what happens is that VS goes on to execute the Update() method, somehow figures something has gone wrong, goes into the "catch" and deletes the file. That is wrong, because I want my catch to run when there is a true exception, not when I debug the program and force it to stop.

    Read the article

  • Doing a global count of an object type (like Users), best practice?

    - by user246114
    Hi, I know keeping global counters is frowned upon in App Engine, but I am interested in getting some stats, like once every 24 hours. For example, I'd like to count the number of User objects in the system once every 24 hours. So how do we do this? Do we simply keep a set of admin-tool functions which do something like:

    SELECT FROM com.me.project.server.User;

    and just check the size of the returned list? This is kind of a bummer, because the datastore would have to deserialize every single User instance to create the returned list, right? I could possibly optimize this by asking for only the keys to be returned, so the whole User object doesn't have to be deserialized. Then again, a global counter for the number of users probably wouldn't create too much contention, because there probably won't be hundreds of signups a minute for the service I'm creating. How should we go about doing this? Getting my total number of users once a day is probably a pretty typical operation? Thank you
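    A sketch of the keys-only idea using the low-level datastore API, assuming the entity kind is named "User"; note that older datastore releases capped count results, so a very large kind may need a cursor-based tally instead:

    import com.google.appengine.api.datastore.DatastoreService;
    import com.google.appengine.api.datastore.DatastoreServiceFactory;
    import com.google.appengine.api.datastore.FetchOptions;
    import com.google.appengine.api.datastore.Query;

    // Keys-only queries skip entity deserialization entirely, so a daily
    // count can run without pulling every User entity over the wire.
    public class UserCounter {
        public static int countUsers() {
            DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
            Query q = new Query("User").setKeysOnly();
            return ds.prepare(q).countEntities(FetchOptions.Builder.withDefaults());
        }
    }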

    Read the article

  • ODI 11g – Insight to the SDK

    - by David Allan
    This post is a useful index into the ODI SDK that cross-references the type names from the user interface with the SDK class, and also the finder for how to get a handle on the object or objects. The volume of content in the SDK might seem a little ominous (there is a lot there), but there is a general pattern to the SDK that I will describe here. I will also illustrate some basic CRUD operations so you can see how the SDK usage pattern works. The examples are written in Groovy; you can simply run them from the Groovy console in ODI 11.1.1.6.

    Entry to the Platform

    Object      | Finder                                | SDK
    odiInstance | odiInstance (groovy variable for console) | OdiInstance

    Topology Objects

    Object                              | Finder                              | SDK
    Technology                          | IOdiTechnologyFinder                | OdiTechnology
    Context                             | IOdiContextFinder                   | OdiContext
    Logical Schema                      | IOdiLogicalSchemaFinder             | OdiLogicalSchema
    Data Server                         | IOdiDataServerFinder                | OdiDataServer
    Physical Schema                     | IOdiPhysicalSchemaFinder            | OdiPhysicalSchema
    Logical Schema to Physical Mapping  | IOdiContextualSchemaMappingFinder   | OdiContextualSchemaMapping
    Logical Agent                       | IOdiLogicalAgentFinder              | OdiLogicalAgent
    Physical Agent                      | IOdiPhysicalAgentFinder             | OdiPhysicalAgent
    Logical Agent to Physical Mapping   | IOdiContextualAgentMappingFinder    | OdiContextualAgentMapping
    Master Repository                   | IOdiMasterRepositoryInfoFinder      | OdiMasterRepositoryInfo
    Work Repository                     | IOdiWorkRepositoryInfoFinder        | OdiWorkRepositoryInfo

    Project Objects

    Object        | Finder                   | SDK
    Project       | IOdiProjectFinder        | OdiProject
    Folder        | IOdiFolderFinder         | OdiFolder
    Interface     | IOdiInterfaceFinder      | OdiInterface
    Package       | IOdiPackageFinder        | OdiPackage
    Procedure     | IOdiUserProcedureFinder  | OdiUserProcedure
    User Function | IOdiUserFunctionFinder   | OdiUserFunction
    Variable      | IOdiVariableFinder       | OdiVariable
    Sequence      | IOdiSequenceFinder       | OdiSequence
    KM            | IOdiKMFinder             | OdiKM

    Load Plans and Scenarios

    Object                        | Finder                    | SDK
    Load Plan                     | IOdiLoadPlanFinder        | OdiLoadPlan
    Load Plan and Scenario Folder | IOdiScenarioFolderFinder  | OdiScenarioFolder

    Model Objects

    Object    | Finder               | SDK
    Model     | IOdiModelFinder      | OdiModel
    Sub Model | IOdiSubModel         | OdiSubModel
    DataStore | IOdiDataStoreFinder  | OdiDataStore
    Column    | IOdiColumnFinder     | OdiColumn
    Key       | IOdiKeyFinder        | OdiKey
    Condition | IOdiConditionFinder  | OdiCondition

    Operator Objects

    Object         | Finder                   | SDK
    Session Folder | IOdiSessionFolderFinder  | OdiSessionFolder
    Session        | IOdiSessionFinder        | OdiSession
    Schedule       |                          | OdiSchedule

    How to Create an Object? Here is a simple example to create a project; it uses IOdiEntityManager.persist to persist the object.

    import oracle.odi.domain.project.OdiProject;
    import oracle.odi.core.persistence.transaction.support.DefaultTransactionDefinition;

    txnDef = new DefaultTransactionDefinition();
    tm = odiInstance.getTransactionManager()
    txnStatus = tm.getTransaction(txnDef)
    project = new OdiProject("Project For Demo", "PROJECT_DEMO")
    odiInstance.getTransactionalEntityManager().persist(project)
    tm.commit(txnStatus)

    How to Update an Object? This update example uses the methods on the OdiProject object to change the name of the project created above; it is then persisted.

    import oracle.odi.domain.project.OdiProject;
    import oracle.odi.domain.project.finder.IOdiProjectFinder;
    import oracle.odi.core.persistence.transaction.support.DefaultTransactionDefinition;

    txnDef = new DefaultTransactionDefinition();
    tm = odiInstance.getTransactionManager()
    txnStatus = tm.getTransaction(txnDef)
    prjFinder = (IOdiProjectFinder)odiInstance.getTransactionalEntityManager().getFinder(OdiProject.class);
    project = prjFinder.findByCode("PROJECT_DEMO");
    project.setName("A Demo Project");
    odiInstance.getTransactionalEntityManager().persist(project)
    tm.commit(txnStatus)

    How to Delete an Object? Here is a simple example to delete all of the sessions; it uses IOdiEntityManager.remove to delete each object.

    import oracle.odi.domain.runtime.session.finder.IOdiSessionFinder;
    import oracle.odi.domain.runtime.session.OdiSession;
    import oracle.odi.core.persistence.transaction.support.DefaultTransactionDefinition;

    txnDef = new DefaultTransactionDefinition();
    tm = odiInstance.getTransactionManager()
    txnStatus = tm.getTransaction(txnDef)
    sessFinder = (IOdiSessionFinder)odiInstance.getTransactionalEntityManager().getFinder(OdiSession.class);
    sessc = sessFinder.findAll();
    sessItr = sessc.iterator()
    while (sessItr.hasNext()) {
      sess = (OdiSession) sessItr.next()
      odiInstance.getTransactionalEntityManager().remove(sess)
    }
    tm.commit(txnStatus)

    This isn't an all-encompassing summary of the SDK, but it covers a lot of the content to give you a good handle on the objects and how they work. For details of how specific complex objects are created via the SDK, it's best to look at postings such as the interface builder posting here. Have fun, happy coding!

    Read the article

  • Silverlight Cream for June 01, 2010 -- #874

    - by Dave Campbell
    In this Issue: Michael Washington, Alan Beasley and Michael Washington, Miroslav Miroslavov, Max Paulousky, Teresa and Ronald Burger, Laurent Duveau, Tim Heuer, Jeff Brand, Mike Snow, and John Papa.

    Shoutouts: To pay homage to the Advanced Options button in Expression Blend, Adam Kinney posted: Expression Blend Advanced Options square wallpaper. SilverLaw stood his drag-and-drop ripple on its head for this one: Silver Soccer - A Case Study for the Flexible Surface Effect (Silverlight 4).

    From SilverlightCream.com:

    Expression Blend DataStore - A Powerful Tool For Designers: Michael Washington dug into the documentation and, with some Microsoft assistance, has figured out how to use the SetDataStoreAction in SketchFlow... good tutorial, and a game to demonstrate its use.
    Windows Phone 7 View Model Style Video Player: Alan Beasley and Michael Washington teamed up again to produce a ViewModel-style video player for WP7... very nice interface, I might add... very detailed tutorial and all the code... oh, and did you notice it uses MVVM Light... on WP7? Just thought I'd mention that :)
    Navigation in 3D world of 2D objects: In part 7 of the CompleteIT code explanation, Miroslav Miroslavov discusses some of the very cool animation they did... 3D, moving camera... cool stuff!
    Search Engine Optimization (SEO) for Silverlight Applications, Part 2: Max Paulousky has part 2 of his Silverlight 4 and SEO series up. In part 2 he discusses sitemaps and HTML content providing. He also has good links showing where to submit your sitemaps and information.
    Mousin' down the PathListBox: Teresa and Ronald Burger have a post up about the PathListBox: how they drew the path they ended up using, and the code used to enable animation.
    Dynamically apply and change Theme with the Silverlight Toolkit: We've all had fun playing with themes, but Laurent Duveau has an example up of letting your users change the theme at run time.
    Microsoft Translator client library for Silverlight: Tim Heuer has been playing with the Microsoft Translator for Silverlight, and he has a "Works on My Machine" license on what he's making available... but considering his access to resources, I'd say go for it :)
    Custom Per-Page Transitions in Windows Phone 7: Jeff Brand has a follow-on to his other WP7 post about page transitions, now discussing per-page transitions.
    Silverlight Tip of the Day #26 - Changing the Startup Class: Mike Snow's latest tip is a little more involved than a tip... changing the startup class and actually removing (in his example) the page and app classes... code and XAML! I've seen this before but never explained as cleanly... fun stuff.
    Behaviors in Blend 4 (Silverlight TV #30): Episode 30 of Silverlight TV (now a tag at Silverlight Cream) finds John Papa talking to Adam Kinney about Behaviors in Blend 4... not only using them but creating a custom one.

    Stay in the 'Light!

    Read the article

  • Indexing data from multiple tables with Oracle Text

    - by Roger Ford
    It's well known that Oracle Text indexes perform best when all the data to be indexed is combined into a single index. The query

    select * from mytable where contains (title, 'dog') > 0 or contains (body, 'cat') > 0

    will tend to perform much worse than

    select * from mytable where contains (text, 'dog WITHIN title OR cat WITHIN body') > 0

    For this reason, Oracle Text provides the MULTI_COLUMN_DATASTORE, which will combine data from multiple columns into a single index. Effectively, it constructs a "virtual document" at indexing time, which might look something like:

    <title>the big dog</title>
    <body>the ginger cat smiles</body>

    This virtual document can be indexed using either AUTO_SECTION_GROUP, or by explicitly defining sections for title and body, allowing the query as expressed above. Note that we've used a column called "text": this might be a dummy column added to the table simply to allow us to create an index on it, or we could have created the index on either of the "real" columns, title or body. It should be noted that MULTI_COLUMN_DATASTORE doesn't automatically handle updates to the columns it uses; if you create the index on the column text but specify that columns title and body are to be indexed, you will need to arrange triggers such that the text column is updated whenever title or body is altered.

    That works fine for single tables. But what if we actually want to combine data from multiple tables? In that case there are two approaches which work well:

    1. Create a real table which contains a summary of the information, and create the index on that using the MULTI_COLUMN_DATASTORE. This is simple and effective, but it does use a lot of disk space, as the information to be indexed has to be duplicated.
    2. Create our own "virtual" documents using the USER_DATASTORE. The user datastore allows us to specify a PL/SQL procedure which will be used to fetch the data to be indexed, returned in a CLOB, or occasionally in a BLOB or VARCHAR2. This PL/SQL procedure is called once for each row in the table to be indexed, and is passed the ROWID value of the current row being indexed. The actual contents of the procedure are entirely up to the owner, but it is normal to fetch data from one or more columns of database tables.

    In both cases, we still need to take care of updates: making sure that we have all the triggers necessary to update the indexed column (and, in case 1, the summary table) whenever any of the data to be indexed gets changed.

    I've written full examples of both these techniques, as SQL scripts to be run in the SQL*Plus tool. You will need to run them as a user who has the CTXAPP role and the CREATE DIRECTORY privilege. Part of the data to be indexed is a Microsoft Word file called "1.doc". You should create this file in Word, preferably containing the single line of text "test document". This file can be saved anywhere, but the SQL scripts need to be changed so that the "create or replace directory" command refers to the right location. In the example, I've used C:\doc.

    multi_table_indexing_1.sql: creates a summary table containing all the data, and uses multi_column_datastore. Download link / View in browser
    multi_table_indexing_2.sql: creates "virtual" documents using a procedure as a user_datastore. Download link / View in browser

    Read the article

  • ESXi 5.1 ghettoVCB stuck at Clone: 10% done

    - by stormdrain
    Trying to run ghettoVCB for the first time here. I am using a NAS that is set up as a datastore on the host. I did a dry run and it completed without error. The VM is ~500GB and is the only one on the host that I'm trying to back up. I proceeded to start the actual backup:

    ./ghettoVCB.sh -m vmname -g ghettoVCB.conf

    It goes through the config and looks like it's taking off:

    2013-10-24 11:43:19 -- info: CONFIG - USING GLOBAL GHETTOVCB CONFIGURATION FILE = ghettoVCB.conf
    2013-10-24 11:43:19 -- info: CONFIG - VERSION = 2013_01_11_0
    2013-10-24 11:43:19 -- info: CONFIG - GHETTOVCB_PID = 17398616
    2013-10-24 11:43:19 -- info: CONFIG - VM_BACKUP_VOLUME = /vmfs/volumes/nas2tb-001/esxi4
    2013-10-24 11:43:19 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 3
    2013-10-24 11:43:19 -- info: CONFIG - VM_BACKUP_DIR_NAMING_CONVENTION = 2013-10-24_11-43-18
    2013-10-24 11:43:19 -- info: CONFIG - DISK_BACKUP_FORMAT = thin
    2013-10-24 11:43:19 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0
    2013-10-24 11:43:19 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0
    2013-10-24 11:43:19 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 4
    2013-10-24 11:43:19 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5
    2013-10-24 11:43:19 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15
    2013-10-24 11:43:19 -- info: CONFIG - LOG_LEVEL = info
    2013-10-24 11:43:19 -- info: CONFIG - BACKUP_LOG_OUTPUT = /tmp/ghettoVCB-2013-10-24_11-43-18-17398616.log
    2013-10-24 11:43:19 -- info: CONFIG - ENABLE_COMPRESSION = 0
    2013-10-24 11:43:19 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0
    2013-10-24 11:43:19 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0
    2013-10-24 11:43:19 -- info: CONFIG - ALLOW_VMS_WITH_SNAPSHOTS_TO_BE_BACKEDUP = 0
    2013-10-24 11:43:19 -- info: CONFIG - VMDK_FILES_TO_BACKUP = all
    2013-10-24 11:43:19 -- info: CONFIG - VM_SHUTDOWN_ORDER =
    2013-10-24 11:43:19 -- info: CONFIG - VM_STARTUP_ORDER =
    2013-10-24 11:43:19 -- info: CONFIG - EMAIL_LOG = 0
    2013-10-24 11:43:19 -- info:
    2013-10-24 11:43:22 -- info: Initiate backup for vmname
    2013-10-24 11:43:22 -- info: Creating Snapshot "ghettoVCB-snapshot-2013-10-24" for serv2
    Destination disk format: VMFS thin-provisioned
    Cloning disk '/vmfs/volumes/esxi4-storage/vmname/vmname_1.vmdk'...
    Clone: 10% done.

    and it's been that way for over an hour now, stuck at "Clone: 10% done". Thing is: I can see the vmdk on the NAS, and it looks like almost the whole thing is there. On the NAS it shows ~430GB, but the vSphere Client summary shows 507GB. I don't see the vmdk on the NAS growing any more. The log file mimics some of the above and is sitting at "Creating Snapshot..."; nothing else is coming in. Is the vmdk on the NAS showing all those GB because of the provisioning or something? I.e., is the size of the file not necessarily indicative of the amount of actual data that has been copied? Is there a reason it might be stuck at 10%? I.e., could it really be taking this long? Any other tips? Thanks.

    Edit: as soon as I hit the Submit button, I glanced over to see that it has incremented to 11% done. Good to know it'll be complete sometime around when the sun explodes.

    Read the article

  • ODI 12c's Mapping Designer - Combining Flow Based and Expression Based Mapping

    - by Madhu Nair
    post by David Allan. ODI is renowned for its declarative designer and minimal expression-based paradigm. The new ODI 12c release has extended this even further to provide an extended declarative mapping designer. The ODI 12c mapper is a fusion of ODI's new declarative designer with the familiar flow-based designer, while retaining ODI's key differentiators: minimal expression-based definition; the ability to incrementally design an interface and to extract/load data from any combination of sources; and, most importantly, backing by ODI's extensible knowledge module framework. The declarative nature of the product has been extended to include an extensible library of common components that can be used to easily build simple to complex data integration solutions. Big usability improvements, through consistent interactions of components and concepts all constructed around the familiar knowledge module framework, provide the utmost flexibility. Here is a little taster.

    So what is a mapping? A mapping comprises a logical design and at least one physical design; it may have many. A mapping can have many targets, of any technology, and can be arbitrarily complex. You can build reusable mappings and use them in other mappings or other reusable mappings. In the example below, all of the information from an Oracle bonus table and a bonus file are joined with an Oracle employees table before being written to a target. Some things that are cool include the one-click expression cross-referencing, so you can easily see what's used where within the design. The logical design in a mapping describes what you want to accomplish (see the animated GIF here illustrating how the above mapping was designed). The physical design lets you configure how it is to be accomplished. So you could have one logical design that is realized as an initial load in one physical design and as an incremental load in another. In the physical design below, we can customize how the mapping is accomplished by picking Knowledge Modules; in ODI 12c you can pick multiple nodes (logical or physical) and see common properties. This is useful, as we can quickly compare property values across objects: below, we can see knowledge module settings on the access points between execution units side by side; in the example, one table is retrieved via database links and the other is an external table. In the logical design I had selected an append mode for the integration type, so by default the IKM on the target will choose the most suitable/default IKM, which in this case is an in-built Oracle Insert IKM (see image below). This supports insert and select hints for the Oracle database (the ANSI SQL Insert IKM does not), so by default you will get direct-path inserts with Oracle on this statement. In ODI 12c, the mapper is just that, a mapper. Design your mapping, write to multiple targets; the targets can be in the same data server, in different data servers or in totally different technologies; it does not matter. ODI 12c will derive and generate a plan that you can use or customize with knowledge modules. Some of the use cases which are greatly simplified include multiple heterogeneous targets, multi-target inserts for Oracle, and writing of XML. Let's switch it up now and look at a slightly different example to illustrate expression reuse. In ODI you can define reusable expressions using user functions. These can be reused across mappings and the implementations specialized per technology.
    So you can have common expressions across Oracle, SQL Server, Hive etc., shielding the design from the physical aspects of the generated language. Another way to reuse is within a mapping itself. In ODI 12c, expressions can be defined and reused within a mapping. Rather than replicating the expression text in larger expressions, you can decompose it into smaller snippets. Below you can see UNIT_TAX_AMOUNT has been defined and is used in two downstream target columns: it is used in TOTAL_TAX_AMOUNT, and it is used in UNIT_TAX_AMOUNT (a recording of the calculation). You can see the columns that an expression depends on (upstream) and the columns the expression is used in (downstream) highlighted within the mapper. Multi-selecting attributes is also a convenient way to see what's being used where; below I have selected TOTAL_TAX_AMOUNT in the target datastore and UNIT_TAX_AMOUNT in UNIT_CALC. You can now see many expressions at once and understand much more at one time, without needlessly clicking around and memorizing information. Our mantra during development was to keep it simple and make the tool more powerful, doing even more for the user. The development team was a fusion of many teams from Oracle Warehouse Builder, Sunopsis and BEA AquaLogic, debating and perfecting the mapper in ODI 12c. This was quite a project, from supporting the capabilities of ODI in 11g to building the flow-based mapping tool to support the future. I hope this was a useful insight; there is so much more to come on this topic. This is just a preview of much more that you will see of the mapper in ODI 12c.

    Read the article

  • Personal search – the future of search

    - by jamiet
    [Four months ago I wrote a meandering blog post on another blogging site entitled Personal search – the future of search. The points I made therein are becoming more relevant to what I'm reading about and hoping to get involved in in the future, so I'm re-posting here to a wider audience to hopefully get some more feedback and gauge reaction to it. This has been prompted by the book Pull by David Siegel, which is forming my current holiday reading (recommended to me by a commenter on my previous post Interesting things – Twitter annotations and your phone as a web server), and in particular by Siegel's notion of us all in the future having a personal online data vault.]

    My one-time colleague Paul Dawson recently wrote an article called The Future of Search, and in it he proposed some interesting ideas. Some choice quotes:

    "The growth of Chinese search giant Baidu is an indicator that fully localised and tailored content and offerings have great traction with local audiences."
    "This trend is already driving an increase in the use of specialist searches … Look at how Farecast is now integrated into Bing for example, or how Flightstats is now integrated into Google."
    "Search does not necessarily have to begin with a keyword, but could start instead with a click or a touch. Take a look at Retrievr. Start drawing a picture in the box and see what happens. This is certainly search without the need for typing in keywords."
    "Search technology has advanced greatly in recent years. The recent launch of Microsoft Live Labs' Pivot has given us a taste of what we can expect to see in the future."

    This really got me thinking about where search might go in the future, and as my mind wandered I realised that as the amount of data that we collect about ourselves increases, so too will the need and the desire to search it. The amount of electronic data that exists about each and every person is increasing, and in the near future I fully expect that we are going to be able to store personal data such as:

    - a history of our location (in fact Google Latitude already offers this facility)
    - recordings of all our phone conversations
    - health information history (weight, blood pressure etc.)
    - energy usage
    - spending history
    - what films we watch, what radio stations we listen to
    - voting history

    Of course, most of this stuff is already stored somewhere, but crucially we don't have easy access to it. My utilities supplier knows how much electricity I'm using, but if I want to know for myself I have to go and dig through my statements (assuming I have kept them). Similarly, my doctor probably has ready access to all of my health records, my bank knows exactly what I have spent my money on, my cable supplier knows what I watch on TV, and my mobile phone supplier probably knows exactly where I am and where I've been for the past few years. Strange, then, that none of this electronic information is available to me in a way that I can really make use of it; after all, it's MY information. It's MY data. I created it. That is set to change. As technologies mature and customers become more technically cognizant, they will demand more access to the data that companies hold about them. The companies themselves will realise the benefit they derive from giving users what they want, and will embrace ways of providing it.
    As a result, the amount of data that we store about ourselves is going to increase exponentially, and the desire to search and derive value from that data is going to grow with it; we are about to enter the era of the "personal datastore", and we will want, and need, to search through it in order to make sense of it all. It's interesting, then, that today when we think of search we think of search engines, and yet in these personal datastores we're referring to data that search engines can't touch, because WE own it and we (hopefully) choose to keep it private. Someone, I know not who, is going to lead in this space by making it easy for us to search our data and retrieve information that we have either forgotten or maybe didn't even know in the first place. We will learn new things about ourselves and about our habits; we will share these findings with whomever we choose; we will compare what we discover with others; we will collaborate for mutual benefit; and, most of all, we will educate ourselves as to how to live our lives better. Search will be the means to that end; it will enable us to make sense of the wealth of information that we will collect day in, day out. The future of search is personal. Why would we be interested in anything else?

    @Jamiet

    Read the article


  • ODI 12c - Aggregating Data

    - by David Allan
    This post will look at the aggregation component that was introduced in ODI 12c. For many ETL tool users this shouldn't be a big surprise; it's a little different from ODI 11g, but for good reason. You can use this component for composing data with relational operations such as sum, average and so forth. Also, Oracle SQL supports special functions called analytic SQL functions; in ODI 12c you can now use a specially configured aggregation component or the expression component for these. In database systems, an aggregate transformation is a transformation where the values of multiple rows are grouped together on certain criteria to form a single value of more significant meaning; that's exactly the purpose of the aggregate component. In the image below you can see the aggregate component in action within a mapping; for how this and a few other examples are built, look at the ODI 12c Aggregation viewlet here. The viewlet illustrates a simple aggregation being built, and then some Oracle analytic SQL, such as

    AVG(EMP.SAL) OVER (PARTITION BY EMP.DEPTNO)

    built using both the aggregate component and the expression component. In 11g you used to just write the aggregate expression directly on the target. This made life easy in some cases, but it wasn't a very obvious gesture, and it had other drawbacks with the ordering of transformations (aggregate before join/lookup, after set, and so forth) and with supporting analytic SQL, for example; there are a lot of postings from creative folks working around this in 11g, anything from customizing KMs to bypassing aggregation analysis in the ODI code generator. The aggregate component has a few interesting aspects.

    1. First and foremost, it defines the attributes projected from it. ODI automatically performs the grouping; all you do is define the aggregation expressions for the aggregated columns. In 12c you can control this automatic grouping behavior so that you get the code you desire: you can indicate that an attribute should not be included in the group by, which is what I did in the analytic SQL example using the aggregate component.
    2. The component has a few other properties of interest: it has a HAVING clause and a manual group by clause. The HAVING clause includes a predicate used to filter rows resulting from the GROUP BY clause. Because it acts on the results of the GROUP BY clause, aggregation functions can be used in the HAVING clause predicate. In 11g the filter was overloaded and used for both the having clause and the filter clause; this is no longer the case. If a filter is after an aggregate, it is after the aggregate (not sometimes after, sometimes having).
    3. The manual group by clause lets you use special database grouping grammar if you need to. For example, Oracle has a wealth of highly specialized grouping capabilities for data warehousing, such as the CUBE function. If you want to use specialized functions like that, you can manually define the code here. The example below shows the use of a manual group by, from an example in the Oracle Database data warehousing guide, where the SUM aggregate function is used along with the CUBE function in the group by clause.
    The SQL I am trying to generate looks like the following, from the data warehousing guide:

    SELECT channel_desc, calendar_month_desc, countries.country_iso_code,
           TO_CHAR(SUM(amount_sold), '9,999,999,999') SALES$
    FROM sales, customers, times, channels, countries
    WHERE sales.time_id = times.time_id
      AND sales.cust_id = customers.cust_id
      AND sales.channel_id = channels.channel_id
      AND customers.country_id = countries.country_id
      AND channels.channel_desc IN ('Direct Sales', 'Internet')
      AND times.calendar_month_desc IN ('2000-09', '2000-10')
      AND countries.country_iso_code IN ('GB', 'US')
    GROUP BY CUBE(channel_desc, calendar_month_desc, countries.country_iso_code);

    I can capture the source datastores, the filters and joins using ODI's dataset (or as a traditional flow), which enables us to incrementally design the mapping and the aggregate component for the sum and group by as follows. In the above mapping you can see the joins and filters declared in ODI's dataset, allowing you to capture the relationships of the datastores required in an entity-relationship style, just like ODI 11g. The mix of ODI's declarative design and the common flow design provides a familiar design experience. The example below illustrates flow design (basic arbitrary ordering): a table load where only the employees who have the maximum commission are loaded into a target. The maximum commission is retrieved from the bonus datastore, and there is a lookup using employees as the driving table, with only those having the maximum commission projected. Hopefully this has given you a taster for some of the new capabilities provided by the aggregate component in ODI 12c. In summary, the actions should be much more consistent in behavior and more easily discoverable for users, and the use of components in a flow graph also supports arbitrary designs, while the tool (rather than the interface designer) takes care of the realization using ODI's knowledge modules. Interested to know if a deep dive into each component would be interesting for folks. Any thoughts?

    Read the article

  • Best practices for sending automated daily emails from web service

    - by Tauren
    I am running a web service that currently sends confirmation emails out to new users via the Gmail SMTP servers. As I'm only getting a few new users each day, this hasn't been a problem. I've recently added new features to the webapp that will require a customized message to be sent out to each user every day. Think of this as similar to the regular messages LinkedIn sends out that give you a status report on the activity in your network. Every user's message will be different. With thousands of users, this means thousands of unique messages will be sent each day.

    Edit: I've since found that these types of email are called "transactional or relationship messages". Spamtacular has a good article on differentiating between marketing and transactional email.

    I don't think using Gmail's SMTP servers will cut it anymore, but I don't know that for sure. I don't know what Gmail's maximum number of outgoing messages per account is (it might be 100/day), but they limit outgoing mail to 500 recipients per message. I'm not sending a single message to 500 recipients; I'm going to be sending thousands of customized messages, with each recipient getting one per day. I'm interested to learn any best practices for doing this (especially for Java-based webapps). Here are some of my thoughts and concerns:

    - Should I set up my own outgoing mail server? If I do this, it seems like I'll have all sorts of other issues to worry about, such as preventing mail server abuse, monitoring bounces, allowing ways to opt out of emails, etc.
    - Are there any tools or services to help with this? Maybe something like OpenEMM, or a service like MailChimp? But those seem focused more toward email marketing campaigns.
    - I don't think I should have the webapp itself handle sending emails, as it currently does for new user signups. I'm thinking I should set up a separate messaging server that can access the same backend/datastore as the webapp. Thoughts on this?
    - Should I consider setting up some sort of message queueing service to help with this, such as JMS, RabbitMQ, ActiveMQ, etc.?
    - Do I need to provide users a way to opt out? Do I need to flag these as bulk messages? I don't really consider these email marketing messages, but I'm unsure what is considered appropriate or proper netiquette.

    Any advice is appreciated. I'm also very interested in open source tools or web services that simplify things and could help me to ramp up as quickly as possible. Thanks!
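    For the sending piece itself, a minimal JavaMail sketch of one per-user message going through a dedicated SMTP relay; the host, port, credentials and addresses are illustrative placeholders, not a recommendation of any particular provider:

    import java.util.Properties;
    import javax.mail.*;
    import javax.mail.internet.*;

    public class DailyMailer {
        public static void send(String to, String body) throws MessagingException {
            Properties props = new Properties();
            props.put("mail.smtp.host", "smtp.example.com"); // hypothetical relay
            props.put("mail.smtp.port", "587");
            props.put("mail.smtp.auth", "true");
            props.put("mail.smtp.starttls.enable", "true");

            Session session = Session.getInstance(props, new Authenticator() {
                @Override
                protected PasswordAuthentication getPasswordAuthentication() {
                    return new PasswordAuthentication("user", "password"); // placeholders
                }
            });

            MimeMessage msg = new MimeMessage(session);
            msg.setFrom(new InternetAddress("no-reply@example.com"));
            msg.addRecipient(Message.RecipientType.TO, new InternetAddress(to));
            msg.setSubject("Your daily activity summary");
            msg.setText(body);
            // An unsubscribe header is a common courtesy for recurring mail.
            msg.addHeader("List-Unsubscribe", "<mailto:unsubscribe@example.com>");
            Transport.send(msg);
        }
    }

    A worker process draining a queue (JMS, RabbitMQ, etc.) would call send() once per user, which keeps the webapp itself out of the sending path.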

    Read the article

  • NoSQL DB for .Net document-based database (ECM)

    - by Dane
    I'm halfway through coding a basic multi-tenant SaaS ECM solution. Each client has its own instance of the database/datastore, but the .NET app is a single instance. The documents are pretty much read-only (i.e. an image archive of TIFFs or PDFs). I've used MSSQL so far, but then started thinking this might be viable in a NoSQL DB (e.g. MongoDB, CouchDB). The basic premise is that it stores documents, each with their own particular indexes. Each tenant can have multiple document types. E.g. one tenant might have an invoice type, which has Customer ID, Invoice Number and Invoice Date. Another tenant might have an application form, which has Member Number, Application Number, Member Name, and Application Date. So far I've used the old method which SharePoint used (uses?): a document table which has int_field_1, int_field_2, date_field_1, date_field_2, etc., plus a "mapping" table which stores the customer-specific index name and the database field it maps to. I've avoided the key-value-pair model in the DB due to the volume of documents. This way, we can support multiple document types in the one table, get reasonably high performance out of it, and allow for custom document-type searches (i.e. the user selects a document type, then is presented with a list of search fields).

    However, a NoSQL DB might make this a lot simpler, as I wouldn't need to worry about denormalizing the document. I've just got concerns about the rest of the data around a document. We store an "action history" against the document. This tracks views, whether someone emails the document from within the system, and other "future" functionality (e.g. faxing). We have control over the document load process, so we can manipulate the data however it needs to be to get it into the document store (e.g. assign unique IDs). Users will not be adding in their own documents, so we shouldn't need to worry about ACID compliance, as the documents are relatively static.

    So, my questions, I guess:

    1. Is a NoSQL DB a good fit?
    2. Is MongoDB the best option for ASP.NET? (I saw Raven and Velocity, but they're still kind of beta.)
    3. Can I store a key for each document, and then store the action history in an MSSQL DB with this key? I don't need to do joins; it would be for when a person clicks "View History" against a document.
    4. How would performance compare between the two (NoSQL DB vs. the denormalized "document" table)?

    Volumes would be up to 200,000 new documents per month for a single tenant. My current scaling plan with the SQL DB involves moving the SQL DB into a cluster when certain thresholds are reached, and then reviewing partitioning and indexing structures.
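    To make the "no mapping table" point concrete: in a document store, each tenant's index fields can live directly on the document, so the int_field_1/mapping-table indirection disappears. The post is .NET, but the collection shape, not the language, is the point; a sketch with the contemporaneous MongoDB Java driver, all names illustrative:

    import com.mongodb.BasicDBObject;
    import com.mongodb.DB;
    import com.mongodb.DBCollection;
    import com.mongodb.Mongo;

    public class DocumentStoreSketch {
        public static void main(String[] args) throws Exception {
            Mongo mongo = new Mongo("localhost");
            DB db = mongo.getDB("tenant_acme");            // one database per tenant
            DBCollection docs = db.getCollection("documents");

            // The invoice's index fields are stored under their real names;
            // another tenant's "application form" would simply carry different keys.
            BasicDBObject invoice = new BasicDBObject("docType", "invoice")
                    .append("customerId", 12345)
                    .append("invoiceNumber", "INV-0042")
                    .append("invoiceDate", new java.util.Date())
                    .append("blobKey", "storage-key-here"); // image itself kept outside the DB
            docs.insert(invoice);

            // Query by the tenant-specific field without any field-mapping layer.
            System.out.println(docs.findOne(new BasicDBObject("invoiceNumber", "INV-0042")));
        }
    }

    The action history could still be keyed off the stored blobKey/document id in MSSQL, since no joins are needed for a "View History" lookup.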

    Read the article

  • One-to-many relationship with JDO in Google App Engine

    - by Marvin
    I've followed the GAE docs on setting up a one-to-many relationship in JDO, but I'm still having trouble retrieving the collection data. I have no problem getting the other, non-collection fields back. Here are my classes:

    @PersistenceCapable
    public class User {
        @PrimaryKey
        @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
        private Key key;

        @Persistent
        private String uniqueId;

        @Persistent
        private String email;

        @Persistent
        private List<Address> addresses = new ArrayList<Address>();
        ...
    }

    @PersistenceCapable
    public class Phone {
        @PrimaryKey
        @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
        private Key key;

        @Persistent
        private String number;
        ...
    }

    public class UserDaoImpl implements UserDao {
        public void insertUser(User user) {
            if (user.getKey() == null) {
                com.google.appengine.api.datastore.Key key =
                    KeyFactory.createKey(User.class.getSimpleName(), user.getEmail());
                user.setKey(key);
            }
            PersistenceManager pm = PersistenceManagerWrapper.getPersistenceManager();
            notNull(user);
            try {
                pm.makePersistent(user);
            } finally {
                pm.close();
            }
        }

        @SuppressWarnings("unchecked")
        public User getUser(String uniqueId) {
            PersistenceManager pm = PersistenceManagerWrapper.getPersistenceManager();
            Query query = pm.newQuery(User.class);
            query.setFilter("uniqueId == uniqueIdParam");
            query.declareParameters("String uniqueIdParam");
            User user = null;
            try {
                List<User> users = (List<User>) query.execute(uniqueId); // TODO abstract this
                if (users.size() > 0)
                    user = users.get(0);
            } finally {
                pm.close();
            }
            return user;
        }
    }

    public class UserDaoImplTest {
        @Test
        public void getUserTest() {
            User user = createTestUser();
            assertNotNull("The user object should not be null", user);
            userDao.insertUser(user);
            User returnedUser = userDao.getUser(TEST_USER_ID);
            assertNotNull("The returnedUser object should not be null", returnedUser);
            Assert.assertPropertyEqualsExcludeProperties("User Object", user, returnedUser, "");
        }
    }

    When I run the test, all of the properties of User are populated, but the list of Phone objects I get back is empty.
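    A hedged guess at the usual culprit: in JDO, collection fields are outside the default fetch group, so they load lazily and come back empty once the PersistenceManager is closed. A common workaround is to touch (or detach) the collection while the pm is still open; note the post declares List<Address> on User but asks about Phone, so this sketch uses the declared addresses field, and getAddresses() plus the fetch-group name are assumed, not shown in the post:

    import java.util.List;
    import javax.jdo.PersistenceManager;
    import javax.jdo.Query;

    public class UserLoader {
        @SuppressWarnings("unchecked")
        public static User getUser(String uniqueId) {
            PersistenceManager pm = PersistenceManagerWrapper.getPersistenceManager();
            try {
                Query query = pm.newQuery(User.class);
                query.setFilter("uniqueId == uniqueIdParam");
                query.declareParameters("String uniqueIdParam");
                List<User> users = (List<User>) query.execute(uniqueId);
                if (users.isEmpty()) return null;
                User user = users.get(0);

                // Option 1: touch the collection while the pm is open, forcing
                // JDO to fetch it from the datastore.
                user.getAddresses().size();

                // Option 2 (alternative): widen the fetch plan explicitly.
                // pm.getFetchPlan().addGroup("addressesGroup"); // hypothetical group name

                return pm.detachCopy(user); // detach with the loaded fields intact
            } finally {
                pm.close();
            }
        }
    }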

    Read the article

  • Speeding up templates in GAE-Py by aggregating RPC calls

    - by Sudhir Jonathan
    Here's my problem:

    class City(Model):
        name = StringProperty()

    class Author(Model):
        name = StringProperty()
        city = ReferenceProperty(City)

    class Post(Model):
        author = ReferenceProperty(Author)
        content = StringProperty()

    The code isn't important; it's this Django template:

    {% for post in posts %}
    <div>{{post.content}}</div>
    <div>by {{post.author.name}} from {{post.author.city.name}}</div>
    {% endfor %}

    Now let's say I get the first 100 posts using Post.all().fetch(limit=100) and pass this list to the template. What happens? It makes 200 more datastore gets: 100 to get each author, and 100 to get each author's city. This is perfectly understandable, actually, since the post only has a reference to the author, and the author only has a reference to the city. The __get__ accessor on the post.author and author.city objects transparently does a get and pulls the data back (see this question). Some ways around this are:

    1. Use Post.author.get_value_for_datastore(post) to collect the author keys (see the link above), and then do a batch get to get them all. The trouble here is that we need to reconstruct a template data object, something which needs extra code and maintenance for each model and handler.
    2. Write an accessor, say cached_author, that checks memcache for the author first and returns that. The problem here is that post.cached_author is going to be called 100 times, which could mean 100 memcache calls.
    3. Hold a static key-to-object map (and refresh it maybe once every five minutes) if the data doesn't have to be very up to date. The cached_author accessor can then just refer to this map.

    All these ideas need extra code and maintenance, and they're not very transparent. What if we could do:

    @prefetch
    def render_template(path, data):
        template.render(path, data)

    It turns out we can: hooks and Guido's instrumentation module both prove it. If the @prefetch method wraps a template render by capturing which keys are requested, we can (at least to one level of depth) capture which keys are being requested, return mock objects, and do a batch get on them. This could be repeated for all depth levels, until no new keys are being requested. The final render could intercept the gets and return the objects from a map. This would change a total of 200 gets into 3, transparently and without any extra code. Not to mention it would greatly cut down the need for memcache, and help in situations where memcache can't be used.

    Trouble is, I don't know how to do it (yet). Before I start trying: has anyone else done this? Or does anyone want to help? Or do you see a massive flaw in the plan?
    Read the article

  • Java multipart POST library

    - by tom
    Is there a multipart POST library out there that achieves the same effect as doing a POST from an HTML form? For example: uploading a file programmatically in Java, versus uploading the file using an HTML form. On the server side, it would just blindly expect the request from the client side to be a multipart POST request and parse out the data as appropriate. Has anyone tried this? Specifically, I am trying to see if I can simulate the following with Java:

    The user creates a blob by submitting an HTML form that includes one or more file input fields. Your app sets blobstoreService.createUploadUrl() as the destination (action) of this form, passing the function a URL path of a handler in your app. When the user submits the form, the user's browser uploads the specified files directly to the Blobstore. The Blobstore rewrites the user's request and stores the uploaded file data, replacing the uploaded file data with one or more corresponding blob keys, then passes the rewritten request to the handler at the URL path you provided to blobstoreService.createUploadUrl(). This handler can do additional processing based on the blob key. Finally, the handler must return a headers-only redirect response (301, 302, or 303), typically a browser redirect to another page indicating the status of the blob upload.

    Set blobstoreService.createUploadUrl as the form action, passing the application path to load when the POST of the form is completed:

    <body>
      <form action="<%= blobstoreService.createUploadUrl("/upload") %>" method="post" enctype="multipart/form-data">
        <input type="file" name="myFile">
        <input type="submit" value="Submit">
      </form>
    </body>

    Note that this is how the upload form would look if it were created as a JSP. The form must include a file upload field, and the form's enctype must be set to multipart/form-data. When the user submits the form, the POST is handled by the Blobstore API, which creates the blob. The API also creates an info record for the blob, stores the record in the datastore, and passes the rewritten request to your app on the given path with a blob key.
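    One common way to do this is Apache HttpClient 4.x with its httpmime module, which builds the same multipart/form-data body a browser form would send. A sketch; the upload URL is a placeholder standing in for the value returned by blobstoreService.createUploadUrl():

    import java.io.File;
    import org.apache.http.HttpResponse;
    import org.apache.http.client.HttpClient;
    import org.apache.http.client.methods.HttpPost;
    import org.apache.http.entity.mime.MultipartEntity;
    import org.apache.http.entity.mime.content.FileBody;
    import org.apache.http.entity.mime.content.StringBody;
    import org.apache.http.impl.client.DefaultHttpClient;

    public class MultipartUpload {
        public static void main(String[] args) throws Exception {
            HttpClient client = new DefaultHttpClient();
            // Placeholder: use the URL obtained from createUploadUrl() at request time.
            HttpPost post = new HttpPost("http://example.appspot.com/generated-upload-url");

            MultipartEntity entity = new MultipartEntity();
            entity.addPart("myFile", new FileBody(new File("photo.jpg"), "image/jpeg"));
            entity.addPart("comment", new StringBody("optional form field"));
            post.setEntity(entity);

            HttpResponse response = client.execute(post);
            System.out.println(response.getStatusLine()); // expect the Blobstore's redirect
        }
    }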

    Read the article

  • Sessions not persisting between requests

    - by klonq
    My session objects are only stored within the request scope on Google App Engine, and I can't figure out how to persist objects between requests. The docs are next to useless on this matter, and I can't find anyone who's experienced a similar problem. Please help.

    When I store session objects in the servlet and forward the request to a JSP using:

    getServletContext().getRequestDispatcher("/example.jsp").forward(request, response);

    everything works like it should. But when I store objects in the session and redirect the request using:

    response.sendRedirect("/example/url");

    the session objects are lost to the ether. In fact, when I dump session key/value pairs on new requests there is absolutely nothing; session objects only appear within the request scope of the servlets that create them. It appears to me that the objects are not being written to Memcache or the Datastore. In terms of configuring sessions for my application, I have set

    <sessions-enabled>true</sessions-enabled>

    in appengine-web.xml. Is there anything else I am missing? The single paragraph of documentation on sessions also notes that only objects which implement Serializable can be stored in the session between requests. I have included an example of the code which is not working below. The obvious solution is to not use redirects, and this might be OK for the example given below, but some application data does need to be stored in the session between requests, so I need to find a solution to this problem.

    EXAMPLE: The class FlashMessage gives feedback to the user from server-side operations.

    if (email.send()) {
        FlashMessage flash = new FlashMessage(FlashMessage.SUCCESS, "Your message has been sent.");
        session.setAttribute(FlashMessage.SESSION_KEY, flash);
        // The flash message will not be available in the session object in the next request
        response.sendRedirect(URL.HOME);
    } else {
        FlashMessage flash = new FlashMessage(FlashMessage.ERROR, FlashMessage.INVALID_FORM_DATA);
        session.setAttribute(FlashMessage.SESSION_KEY, flash);
        // The flash message is displayed without problem
        getServletContext().getRequestDispatcher(Templates.CONTACT_FORM).forward(request, response);
    }

    FlashMessage.java:

    import java.io.Serializable;

    public class FlashMessage implements Serializable {
        private static final long serialVersionUID = 8109520737272565760L;
        // I have tried using different, default and no serialVersionUID

        public static final String SESSION_KEY = "flashMessage";
        public static final String ERROR = "error";
        public static final String SUCCESS = "success";
        public static final String INVALID_FORM_DATA = "Your request failed to validate.";

        private String message;
        private String type;

        public FlashMessage(String type, String message) {
            this.type = type;
            this.message = message;
        }

        public String display() {
            return "<div id='flash' class='" + type + "'>" + message + "</div>";
        }
    }
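    Not from the original post, but since GAE persists session attributes by serializing them, one quick way to rule out serialization as the culprit is to round-trip the object by hand; a minimal sketch:

    import java.io.*;

    // Sanity check: if this round-trip fails, GAE's session persistence
    // (which serializes attributes between requests) would fail too.
    public class SerializationCheck {
        public static void main(String[] args) throws Exception {
            FlashMessage original = new FlashMessage(FlashMessage.SUCCESS, "test");

            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(bytes);
            out.writeObject(original);
            out.close();

            ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()));
            FlashMessage copy = (FlashMessage) in.readObject();
            System.out.println(copy.display()); // should print the flash div
        }
    }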

    Read the article

  • How do I set up MVP for a Winforms solution?

    - by JonWillis
    Question moved from Stack Overflow - http://stackoverflow.com/questions/4971048/how-do-i-set-up-mvp-for-a-winforms-solution

    I have used MVP and MVC in the past, and I prefer MVP as it controls the flow of execution so much better in my opinion. I have created my infrastructure (datastore/repository classes) and use them without issue when hard-coding sample data, so now I am moving onto the GUI and preparing my MVP.

    Section A

    1. I have seen MVP use the view as the entry point: in the view's constructor it creates the presenter, which in turn creates the model, wiring up events as needed.
    2. I have also seen the presenter used as the entry point: a view, model and presenter are created, and the presenter is given the view and model objects in its constructor to wire up the events.
    3. As in 2, but the model is not passed to the presenter. Instead the model is a static class whose methods are called and whose responses are returned directly.

    Section B

    In terms of keeping the view and model in sync I have seen:

    1. Whenever a value in the view changes (e.g. the TextChanged event in .NET/C#), the view fires a DataChangedEvent which is passed through into the model, keeping it in sync at all times. Where the model changes (e.g. a background event it listens to), the view is updated via the same idea of raising a DataChangedEvent. When a user wants to commit changes, a SaveEvent fires and passes through into the model to make the save. In this case the model mimics the view's data and processes actions.
    2. Similar to B1, but the view does not sync with the model all the time. Instead, when the user wants to commit changes, a SaveEvent is fired and the presenter grabs the latest details and passes them into the model. In this case the model does not know about the view's data until it is required to act upon it, at which point it is passed all the needed details.

    Section C

    Displaying business objects in the view, i.e. an object (MyClass) rather than primitive data (int, double):

    1. The view has property fields for all the data it will display, typed as domain/business objects. For example, the view exposes an IEnumerable<IAnimal> property view.Animals, even though the view renders these as nodes in a TreeView, and exposes the selected animal as an IAnimal property SelectedAnimal.
    2. The view has no knowledge of domain objects; it exposes properties only for primitive types and framework-included (.NET/Java) types. In this instance the presenter passes the domain object to an adapter object, and the adapter translates the given business object into the controls visible on the view. The adapter must have access to the actual controls on the view, not just any view, so it becomes more tightly coupled.

    Section D

    Multiple views used to create a single control. For example, you have a complex view with a simple model, like saving objects of different types, and a menu system at the side where each click on an item shows the appropriate controls.

    1. You create one huge view that contains all of the individual controls, which are exposed via the view's interface.
    2. You have several views: one view for the menu plus a blank panel. This view creates the other views required but does not display them (visible = false); it also implements the interface of each view it contains (i.e. its child views) so it can expose them to one presenter. The blank panel is filled with the other views (Controls.Add(myView) and myView.Visible = true). The events raised in these child views are handled by the parent view, which in turn passes the events to the presenter, and vice versa for supplying events back down to child elements.
    3. Each view, be it the main parent or a smaller child view, is wired into its own presenter and model. You can literally just drop a view control onto an existing form and it will have its functionality ready; it just needs wiring into a presenter behind the scenes.

    Section E

    Should everything have an interface? How the MVP is done in the above examples will affect this answer, as the variations might not be cross-compatible.

    1. Everything has an interface: the View, the Presenter and the Model. Each of these then obviously has a concrete implementation, even if you only have one concrete view, model and presenter.
    2. The View and the Model have interfaces. This allows the views and models to differ. The presenter creates/is given view and model objects, and it just serves to pass messages between them.
    3. Only the View has an interface. The Model has static methods and is never instantiated, thus there is no need for an interface. If you want a different model, the presenter calls a different set of static class methods. Being static, the Model has no link to the presenter.

    Personal thoughts

    From all the different variations I have presented (most of which I have probably used in some form), I am sure there are more. I prefer A3 for keeping business logic reusable outside just MVP, and B2 for less data duplication and fewer events being fired. I prefer C1 for not adding in another class; sure, it puts a small amount of non-unit-testable logic into a view (how a domain object is visualised), but this could be code-reviewed, or simply viewed in the application. If the logic were complex I would agree to an adapter class, but not in all cases.

    For Section D, I feel D1 creates a view that is too big, at least for a menu example. I have used D2 and D3 before. The problem with D2 is that you end up writing lots of code to route events to and from the presenter to the correct child view, and it is not drag/drop compatible; each new control needs more wiring to support the single presenter. D3 is my preferred choice, but it adds yet more classes (presenters and models to deal with the view), even if the view happens to be very simple or has no need to be reused. I think a mixture of D2 and D3 is best, based on circumstances.

    As for Section E, I think everything having an interface could be overkill. I already do it for domain/business objects and often see no advantage in the "design" by doing so, but it does help in mocking objects for tests. Personally I would see E2 as the classic solution, although I have seen E3 used in two projects I have worked on previously.

    Question

    Am I implementing MVP correctly? Is there a right way of going about it? I've read Martin Fowler's work, which has variations, and I remember when I first started doing MVC: I understood the concept, but could not originally work out where the entry point is. Everything has its own function, but what controls and creates the original set of MVC objects?
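    A minimal sketch of one of the combinations discussed above (A2: the presenter is handed both objects, B2: the presenter pulls view data on save, E2: view and model behind interfaces). The interface and class names here are illustrative, not taken from the question:

        using System;

        // Hypothetical view contract; a Winforms Form would implement this,
        // mapping PersonName to a TextBox and raising SaveRequested from a
        // button's Click handler.
        public interface IPersonView
        {
            string PersonName { get; }
            event EventHandler SaveRequested;
            void ShowStatus(string message);
        }

        public interface IPersonModel
        {
            void Save(string personName);
        }

        // Variation A2: the presenter receives both objects and wires the events.
        public class PersonPresenter
        {
            private readonly IPersonView view;
            private readonly IPersonModel model;

            public PersonPresenter(IPersonView view, IPersonModel model)
            {
                this.view = view;
                this.model = model;
                view.SaveRequested += OnSaveRequested;
            }

            // Variation B2: the model only sees the view's data when a commit
            // is requested; there is no continuous syncing.
            private void OnSaveRequested(object sender, EventArgs e)
            {
                model.Save(view.PersonName);
                view.ShowStatus("Saved.");
            }
        }

    In this variation the entry point (e.g. Program.Main) constructs the concrete form, the concrete model and the presenter, which answers the "what creates the original set of objects" question: neither the view nor the model does; the composition root does.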

    Read the article

  • CodePlex Daily Summary for Monday, January 10, 2011

    CodePlex Daily Summary for Monday, January 10, 2011

    Popular Releases

    Sense/Net Enterprise Portal & ECMS: SenseNet 6.0.1 Community Edition for .NET 4: SenseNet 6.0.1 Community Edition for .NET 4 with SQL CE 4.0. This half year we have been working quite fiercely to bring you the long-awaited release of Sense/Net 6.0. Download this Community Edition for the .NET 4 platform to see what we have been up to. These months we have worked on getting the WebCMS capabilities of Sense/Net 6.0 up to par. New features include: a new, powerful page and portlet editing experience; HTML and CSS cleanup; a new, powerful site skinning system. Upgraded, light...

    Agile Personal Body Of Knowledge: v0.2.pdf: (Chinese description unrecoverable from the source encoding; Sina link: http://q.t.sina.com.cn/135484)

    VSSpeedster - Parallel Builds for VS: VSSpeedster 1.1: Parallel builds with MSBuild, integrated in Visual Studio.

    Bernie's Trackviewer: Bernie's Trackviewer Version 1.2: Redesigned user interface of the main form. Also displays waypoints which are not part of a track. Can convert a route into a track. Maximum age of cached maps can be set.

    People's Note: People's Note 0.21: Replaced note viewer buttons with a menu bar to improve scrolling performance. Fixed database relocation on low-resolution devices; thanks to compaNet for reporting. Improved signin error messages. To install: copy the appropriate CAB file onto your WM device and run it.

    mytrip.mvc (CMS & e-Commerce): mytrip.mvc 1.0.51.0 beta2: WEB.mytrip.mvc 1.0.51.0 (web, for installation on hosting). System requirements: .NET 4.0, MSSQL 2008 or MySql (automatic creation of tables in the database); with .\SQLEXPRESS, automatic creation of the database (App_Data folder). SRC.mytrip.mvc 1.0.51.0. System requirements: Visual Studio 2010 or Web Developer 2010, MSSQL 2008 or MySql (automatic creation of tables in the database); with .\SQLEXPRESS, automatic creation of the database (App_Data folder); Connector/Net 6.3.5, MVC3 RC. WARNING: to run and debug SRC.mytrip.mvc 1.0.51.0, download and install MVC3 RC...

    EnhSim: EnhSim 2.3.0: This release supports WoW patch 4.03a at level 85. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Changed how flame shoc...

    AutoLoL: AutoLoL v1.5.3: A message will be displayed when there's an update available. Shows a list of recent mastery files in the Editor tab (requested by quite a few people). Updater: update information is now scrollable; added a button to launch AutoLoL after updating is finished; updated the UI to match that of AutoLoL. Fix: detects and resolves a 'Read Only' state on Version.xml.

    Hawkeye - The .Net Runtime Object Editor: Hawkeye 1.2.4: [EDIT: 2010/01/10] If you are running an x86 Windows, please wait until Release 1.2.5 is made available: Hawkeye is broken on these OSes. This is a maintenance release providing bug fixes. It comes in two flavors: Hawkeye.124.N2 is the standard .NET 2 build, was compiled with Visual Studio 2005 and can only inspect .NET 2 applications. Hawkeye.124.N4 is a .NET 4 build, was compiled with Visual Studio 2010 and can only inspect .NET 4 applications. Please be patient until Release 1.3...

    Extended WPF Toolkit: Extended WPF Toolkit - 1.3.0: What's in the 1.3.0 release? BusyIndicator, ButtonSpinner, ChildWindow, ColorPicker (updated, breaking changes), DateTimeUpDown (new control), Magnifier (new control), MaskedTextBox (new control), MessageBox, NumericUpDown, RichTextBox, RichTextBoxFormatBar (updated), .NET 3.5 binaries and source. Please note: the Extended WPF Toolkit 3.5 is dependent on .NET Framework 3.5 and the WPFToolkit. You must install .NET Framework 3.5 and the WPFToolkit in order to use any features in the To...

    sNPCedit: sNPCedit v0.9d: Added an elementclient coordinate catcher to catch coordinates: select a target in-game (i.e. your char, NPC or monster), then click the button and coordinates+direction will be transferred to the selected row in the table. Corrected labels from Rot to Direction (because it is a vector).

    Ionics Isapi Rewrite Filter: 2.1 latest stable: V2.1 is stable, and is in maintenance mode. This is v2.1.1.25. It is a bug-fix release; there are no new features. 28629 29172 28722 27626 28074 29164 27659 27900. Many documentation updates and fixes, and a proper x64 build environment. This release includes x64 binaries in zip form, but no x64 MSI file. You'll have to manually install x64 servers, following the instructions in the documentation.

    StyleCop for ReSharper: StyleCop for ReSharper 5.1.14980.000: A considerable amount of work has gone into this release. Huge focus on performance around the violation scanning subsystem: caching added to reduce IO operations around reading and merging of settings files; caching added to reduce creation of expensive objects. Users should notice a considerable perf boost and a decrease in memory usage. Bug fixes: StyleCop's new ObjectBasedEnvironment object does not resolve the StyleCop installation path, thus it does not return the correct path ...

    VivoSocial: VivoSocial 7.4.1: New release with bug fixes and updates for performance.

    .NET Extensions - Extension Methods Library for C# and VB.NET: Release 2011.03: Added lots of new extensions and new projects for MVC and Entity Framework: object.FindTypeByRecursion, Int32.InRange, String.RemoveAllSpecialCharacters, String.IsEmptyOrWhiteSpace, String.IsNotEmptyOrWhiteSpace, String.IfEmptyOrWhiteSpace, String.ToUpperFirstLetter, String.GetBytes, String.ToTitleCase, String.ToPlural, DateTime.GetDaysInYear, DateTime.GetPeriodOfDay, IEnumberable.RemoveAll, IEnumberable.Distinct, ICollection.RemoveAll, IList.Join, IList.Match, IList.Cast, Array.IsNullOrEmpty, Array.W...

    EFMVC - ASP.NET MVC 3 and EF Code First: EFMVC 0.5 - ASP.NET MVC 3 and EF Code First: Demo web app using ASP.NET MVC 3, Razor and EF Code First.

    VidCoder: 0.8.0: Added an x64 version. Made the audio output preview more detailed and accurate: if the chosen encoder or mixdown is incompatible with the source, the fallback that will be used is displayed. Added "Auto" to the audio mixdown choices. Reworked non-anamorphic size calculation to work better with non-standard pixel aspect ratios and cropping. Reworked Custom anamorphic to be more intuitive and to allow display width to be set automatically (thanks, Statick). Allowing higher bitrates for 6-ch...

    .NET Voice Recorder: Auto-Tune Release: This is the source code and binaries to accompany the article on the Coding 4 Fun website. It is the Auto Tuner release of the .NET Voice Recorder application.

    BloodSim: BloodSim - 1.3.2.0: The Simulation Log is now automatically disabled and hidden when running 10 or more iterations. Hit and Expertise are now entered by Rating, and include an option for a Racial Expertise bonus. Added an option for the boss to use a periodic magic ability (Dragon Breath). Added an option for the boss to periodically Enrage, gaining a Damage/Attack Speed buff.

    Json.NET: Json.NET 4.0 Release 1: New features: added a Windows Phone 7 project; added dynamic support to LINQ to JSON; added dynamic support to the serializer; added INotifyCollectionChanged to JContainer in the .NET 4 build; added ReadAsDateTimeOffset to JsonReader; added ReadAsDecimal to JsonReader; added covariance to the IJEnumerable type parameter; added XmlSerializer-style Specified property support; added ...

    New Projects

    AssimpXna: AssimpXna is a custom model importer for Xna 4.0 using the Open Asset Import Library (Assimp).

    ATCSim: This is an ATC sim for a school project.

    Azure Role-Based Deployment: Azure Role-Based Deployment demonstrates how to use the CreateDeployment Windows Azure Service Management API to deploy an app from within a web role. This code can easily be ported to a worker role and thus included in the management pack for a hosted service.

    CodeKata AltNet Hispano: Code kata examples used by the AltNet Hispano community.

    DataStoreCleaner: DataStoreCleaner clears the "DataStore" folder which manages Windows Update history. It is useful for fixing WU errors, or for tuning up Windows start-up. It's developed in C#.

    DS_HW2: dshw2

    EFT Calculator: EFT Calculator is an application that performs common cryptographic operations used in electronic funds transfer applications.

    Entity Visualizers: This project has debugger visualizers for several objects in the Entity Framework: EntityObject, EntityCollection, ObjectQuery and ObjectContext. Some of the source code is based on code from Julie Lerman's book "Programming Entity Framework".

    EzyCMS - Easy and Simple CMS made by ASP.Net MVC: EzyCMS lets both end users and developers enjoy CMS benefits and the extendability to meet requirements. The design principles: EASY TO USE, EASY TO EXTEND, FLEXIBLE AND POWERFUL TECHNOLOGY: ASP.Net MVC2, NHibernate, StructureMap, JQuery.

    FlatStore: A simple library to simplify storage of application data when a bulky dedicated database is cumbersome and unnecessary.

    Gigantornis: Gigantornis is a tool for benchmarking your Hypertext Transfer Protocol (HTTP) server. It is designed to give you an impression of how your current server installation performs. In particular, it shows you how many requests per second your server installation is capable of serving.

    Gonte.Dal: Data access layer for .NET.

    Lezatrus: Lezatrus is an open source project to help people find places to eat in Jakarta. It's developed in ASP.NET MVC using Razor and C#. It's the sample app for the Pro ASP.NET MVC Coding Ninja facebook group.

    Moo: Moo is an object-to-object multi-mapper. It is able to use multiple different strategies (in a mix of convention, configuration, attributes and fluent calls) when mapping from one object to another.

    MvcXaml: A custom view engine for ASP.NET MVC that allows controller action methods to return dynamically generated images based on XAML markup.

    Perfect World Bot Development FrameWork: <empty yet>

    Silverlight motion detection: Motion detection using Silverlight 4 camera support and a simple motion detection algorithm.

    Small IT Business Manager: Small IT Business Manager is a tool being created with small-to-midsize IT companies in mind, to allow them to manage their day-to-day chores. Management features planned: Workers, Timesheets, Financial, HR, Basic Project Management, Invoicing.

    something for testing: A sample project for testing code-related issues.

    Structure Copier: This small program is supposed to copy the tree structure of a directory.

    TogNet: A small utility program to toggle between Windows network adapters. Needed a program like this to switch between an external wireless network and the corporate LAN network adapter.

    Read the article

  • Performance required to improve Windows Experience Index?

    - by Ian Boyd
    Is there a guide on the metrics required to obtain a certain Windows Experience Index? A Microsoft guy said in January 2009:

    "On the matter of transparency, it is indeed our plan to disclose in great detail how the scores are calculated, what the tests attempt to measure, why, and how they map to realistic scenarios and usage patterns."

    Has that amount of transparency happened? Is there a TechNet article somewhere?

    My score was limited by my Memory subscore of 5.9. A naive person would suggest: "buy faster RAM." Which is wrong, of course. From the Windows help:

    "If your computer has a 64-bit central processing unit (CPU) and 4 gigabytes (GB) or less random access memory (RAM), then the Memory (RAM) subscore for your computer will have a maximum of 5.9."

    You can buy the fastest, overclocked, liquid-cooled, DDR5 RAM on the planet; you'll still have a maximum Memory subscore of 5.9. So in general the knee-jerk advice "buy better stuff" is not helpful. What I am looking for is the attributes required to achieve a certain score, or to move beyond a current limitation. The information I've been able to compile so far, chiefly from three Windows blog entries and an article:

    Memory subscore

    Score    Conditions
    =======  ================================
    1.0      < 256 MB
    2.0      < 500 MB
    2.9      <= 512 MB
    3.5      < 704 MB
    3.9      < 944 MB
    4.5      <= 1.5 GB
    5.9      < 4.0 GB - 64 MB on a 64-bit OS; Windows Vista highest score
    7.9      Windows 7 highest score

    Graphics subscore

    Score    Conditions
    =======  ======================
    1.0      doesn't support DX9
    1.9      doesn't support WDDM
    4.9      does not support Pixel Shader 3.0
    5.9      doesn't support DX10 or WDDM 1.1; Windows Vista highest score
    7.9      Windows 7 highest score

    Gaming graphics subscore

    Score    Result
    =======  =============================
    1.0      doesn't support D3D
    2.0      supports D3D9, DX9 and WDDM
    5.9      doesn't support DX10 or WDDM 1.1; Windows Vista highest score
    6.0-6.9  good framerates (e.g. 40-50 fps) at normal resolutions (e.g. 1280x1024)
    7.0-7.9  even higher framerates at even higher resolutions
    7.9      Windows 7 highest score

    Processor subscore

    Score    Conditions
    =======  ==========================================================================
    5.9      Windows Vista highest score
    6.0-6.9  many quad core processors will be able to score in the high 6, low 7 ranges
    7.0+     many quad core processors will be able to score in the high 6, low 7 ranges
    7.9      8-core systems will be able to approach 8.9; Windows 7 highest score

    Primary hard disk subscore (note)

    Score    Conditions
    =======  ========================================
    1.9      Limit for pathological drives that stop responding when pending writes
    2.0      Limit for pathological drives that stop responding when pending writes
    2.9      Limit for pathological drives that stop responding when pending writes
    3.0      Limit for pathological drives that stop responding when pending writes
    5.9      highest you're likely to see without an SSD; Windows Vista highest score
    7.9      Windows 7 highest score

    Bonus chatter

    You can find your detailed WEI test results in:

        C:\Windows\Performance\WinSAT\DataStore

    e.g. 2011-11-06 01.00.19.482 Disk.Assessment (Recent).WinSAT.xml:

        <WinSAT>
          <WinSPR>
            <DiskScore>5.9</DiskScore>
          </WinSPR>
          <Metrics>
            <DiskMetrics>
              <AvgThroughput units="MB/s" score="6.4" ioSize="65536" kind="Sequential Read">89.95188</AvgThroughput>
              <AvgThroughput units="MB/s" score="4.0" ioSize="16384" kind="Random Read">1.58000</AvgThroughput>
              <Responsiveness Reason="UnableToAssess" Kind="Cap">TRUE</Responsiveness>
            </DiskMetrics>
          </Metrics>
        </WinSAT>

    Pre-emptive snarky comment: "WEI is useless, it has no relation to reality." Fine: how do I increase my hard drive's random I/O throughput?

    Update - Amount of memory limits rating

    Some people don't believe Microsoft's statement that having less than 4 GB of RAM on a 64-bit edition of Windows limits the rating to 5.9. From xxx.Formal.Assessment (Recent).WinSAT.xml:

        <WinSPR>
          <LimitsApplied>
            <MemoryScore>
              <LimitApplied Friendly="Physical memory available to the OS is less than 4.0GB-64MB on a 64-bit OS : limit mem score to 5.9" Relation="LT">4227858432</LimitApplied>
            </MemoryScore>
          </LimitsApplied>
        </WinSPR>

    References

    - Windows Vista Team Blog: Windows Experience Index: An In-Depth Look
    - Understand and improve your computer's performance in Windows Vista
    - Engineering Windows 7 Blog: Engineering the Windows 7 "Windows Experience Index"
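    Since the question points at the DataStore folder, here is a minimal sketch of pulling the subscores out of the most recent assessment file programmatically. It assumes the WinSPR element layout shown above; the file names vary by timestamp, and reading the folder may require elevation:

        using System;
        using System.IO;
        using System.Linq;
        using System.Xml.Linq;

        class WinSatScores
        {
            static void Main()
            {
                // WinSAT writes one XML file per assessment run; take the newest one.
                string dataStore = Environment.ExpandEnvironmentVariables(
                    @"%WINDIR%\Performance\WinSAT\DataStore");
                FileInfo latest = new DirectoryInfo(dataStore)
                    .GetFiles("*.xml")
                    .OrderByDescending(f => f.LastWriteTime)
                    .First();

                // The WinSPR element holds the subscores, e.g. <DiskScore>5.9</DiskScore>.
                XElement root = XElement.Load(latest.FullName);
                XElement spr = root.Descendants("WinSPR").FirstOrDefault();
                if (spr == null) return;

                foreach (XElement score in spr.Elements()
                    .Where(e => e.Name.LocalName.EndsWith("Score")))
                {
                    Console.WriteLine("{0}: {1}", score.Name, score.Value);
                }
            }
        }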

    Read the article

  • Google App Engine - Secure Cookies

    - by tponthieux
    I'd been searching for a way to do cookie-based authentication/sessions in Google App Engine, because I don't like the idea of memcache-based sessions, and I also don't like the idea of forcing users to create Google accounts just to use a website. I stumbled across someone's posting that mentioned some signed-cookie functions from the Tornado framework, and it looks like what I need. What I have in mind is storing a user's id in a tamper-proof cookie, and maybe using a decorator on the request handlers to test the authentication status of the user; as a side benefit, the user id will be available to the request handler for datastore work and such. The concept would be similar to forms authentication in ASP.NET.

    This code comes from the web.py module of the Tornado framework. According to the docstrings, it "Signs and timestamps a cookie so it cannot be forged" and "Returns the given signed cookie if it validates, or None." I've tried to use it in an App Engine project, but I don't understand the nuances of trying to get these methods to work in the context of the request handler. Can someone show me the right way to do this without losing the functionality that the FriendFeed developers put into it? The set_secure_cookie and get_secure_cookie portions are the most important, but it would be nice to be able to use the other methods as well.

        #!/usr/bin/env python
        import Cookie
        import base64
        import time
        import hashlib
        import hmac
        import datetime
        import re
        import calendar
        import email.utils
        import logging

        def _utf8(s):
            if isinstance(s, unicode):
                return s.encode("utf-8")
            assert isinstance(s, str)
            return s

        def _unicode(s):
            if isinstance(s, str):
                try:
                    return s.decode("utf-8")
                except UnicodeDecodeError:
                    raise HTTPError(400, "Non-utf8 argument")
            assert isinstance(s, unicode)
            return s

        def _time_independent_equals(a, b):
            if len(a) != len(b):
                return False
            result = 0
            for x, y in zip(a, b):
                result |= ord(x) ^ ord(y)
            return result == 0

        def cookies(self):
            """A dictionary of Cookie.Morsel objects."""
            if not hasattr(self, "_cookies"):
                self._cookies = Cookie.BaseCookie()
                if "Cookie" in self.request.headers:
                    try:
                        self._cookies.load(self.request.headers["Cookie"])
                    except:
                        self.clear_all_cookies()
            return self._cookies

        def _cookie_signature(self, *parts):
            self.require_setting("cookie_secret", "secure cookies")
            hash = hmac.new(self.application.settings["cookie_secret"],
                            digestmod=hashlib.sha1)
            for part in parts:
                hash.update(part)
            return hash.hexdigest()

        def get_cookie(self, name, default=None):
            """Gets the value of the cookie with the given name, else default."""
            if name in self.cookies:
                return self.cookies[name].value
            return default

        def set_cookie(self, name, value, domain=None, expires=None, path="/",
                       expires_days=None):
            """Sets the given cookie name/value with the given options."""
            name = _utf8(name)
            value = _utf8(value)
            if re.search(r"[\x00-\x20]", name + value):
                # Don't let us accidentally inject bad stuff
                raise ValueError("Invalid cookie %r: %r" % (name, value))
            if not hasattr(self, "_new_cookies"):
                self._new_cookies = []
            new_cookie = Cookie.BaseCookie()
            self._new_cookies.append(new_cookie)
            new_cookie[name] = value
            if domain:
                new_cookie[name]["domain"] = domain
            if expires_days is not None and not expires:
                expires = datetime.datetime.utcnow() + datetime.timedelta(
                    days=expires_days)
            if expires:
                timestamp = calendar.timegm(expires.utctimetuple())
                new_cookie[name]["expires"] = email.utils.formatdate(
                    timestamp, localtime=False, usegmt=True)
            if path:
                new_cookie[name]["path"] = path

        def clear_cookie(self, name, path="/", domain=None):
            """Deletes the cookie with the given name."""
            expires = datetime.datetime.utcnow() - datetime.timedelta(days=365)
            self.set_cookie(name, value="", path=path, expires=expires,
                            domain=domain)

        def clear_all_cookies(self):
            """Deletes all the cookies the user sent with this request."""
            for name in self.cookies.iterkeys():
                self.clear_cookie(name)

        def set_secure_cookie(self, name, value, expires_days=30, **kwargs):
            """Signs and timestamps a cookie so it cannot be forged"""
            timestamp = str(int(time.time()))
            value = base64.b64encode(value)
            signature = self._cookie_signature(name, value, timestamp)
            value = "|".join([value, timestamp, signature])
            self.set_cookie(name, value, expires_days=expires_days, **kwargs)

        def get_secure_cookie(self, name, include_name=True, value=None):
            """Returns the given signed cookie if it validates, or None"""
            if value is None:
                value = self.get_cookie(name)
            if not value:
                return None
            parts = value.split("|")
            if len(parts) != 3:
                return None
            if include_name:
                signature = self._cookie_signature(name, parts[0], parts[1])
            else:
                signature = self._cookie_signature(parts[0], parts[1])
            if not _time_independent_equals(parts[2], signature):
                logging.warning("Invalid cookie signature %r", value)
                return None
            timestamp = int(parts[1])
            if timestamp < time.time() - 31 * 86400:
                logging.warning("Expired cookie %r", value)
                return None
            try:
                return base64.b64decode(parts[0])
            except:
                return None

    An example cookie produced by this scheme:

        uid=1234|1234567890|d32b9e9c67274fa062e2599fd659cc14

    Parts:
    1. uid is the name of the key
    2. 1234 is your value in the clear
    3. 1234567890 is the timestamp
    4. d32b9e9c67274fa062e2599fd659cc14 is the signature, made from the value and the timestamp
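    The signing scheme itself is framework-independent: base64 the value, append a Unix timestamp, and append an HMAC over the name, value and timestamp; validation recomputes the HMAC, compares it in fixed time, and enforces a maximum age. For illustration only, here is the same idea rendered in C# (a hypothetical helper, not part of Tornado or App Engine; SecureCookie and its members are invented names):

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class SecureCookie
        {
            // Produces base64(value)|timestamp|hex(HMAC-SHA1), mirroring
            // set_secure_cookie above. "secret" plays the role of cookie_secret.
            public static string Sign(string name, string value, byte[] secret)
            {
                string b64 = Convert.ToBase64String(Encoding.UTF8.GetBytes(value));
                string timestamp = DateTimeOffset.UtcNow.ToUnixTimeSeconds().ToString();
                string signature = Signature(secret, name, b64, timestamp);
                return string.Join("|", b64, timestamp, signature);
            }

            // Returns the original value if the cookie validates, else null,
            // mirroring get_secure_cookie above (including the 31-day cap).
            public static string Validate(string name, string cookie, byte[] secret)
            {
                string[] parts = cookie.Split('|');
                if (parts.Length != 3) return null;
                string expected = Signature(secret, name, parts[0], parts[1]);
                if (!FixedTimeEquals(parts[2], expected)) return null;   // forged
                if (!long.TryParse(parts[1], out long ts)) return null;
                if (DateTimeOffset.UtcNow.ToUnixTimeSeconds() - ts > 31L * 86400)
                    return null;                                         // expired
                return Encoding.UTF8.GetString(Convert.FromBase64String(parts[0]));
            }

            private static string Signature(byte[] secret, params string[] parts)
            {
                using (var hmac = new HMACSHA1(secret))
                {
                    // The sequential hash.update calls in the Python version are
                    // equivalent to hashing the concatenation of the parts.
                    byte[] digest = hmac.ComputeHash(
                        Encoding.UTF8.GetBytes(string.Concat(parts)));
                    return BitConverter.ToString(digest).Replace("-", "").ToLowerInvariant();
                }
            }

            // Same constant-time comparison as _time_independent_equals.
            private static bool FixedTimeEquals(string a, string b)
            {
                if (a.Length != b.Length) return false;
                int result = 0;
                for (int i = 0; i < a.Length; i++)
                    result |= a[i] ^ b[i];
                return result == 0;
            }
        }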

    Read the article

  • Why can't a Java servlet send out an object?

    - by Frank
    I use the following method to send out an object from a servlet:

        public void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
            String Full_URL = request.getRequestURL().append("?" + request.getQueryString()).toString();
            String Contact_Id = request.getParameter("Contact_Id");
            String Time_Stamp = Get_Date_Format(6),
                   query = "select from " + Contact_Info_Entry.class.getName()
                         + " where Contact_Id == '" + Contact_Id + "' order by Contact_Id desc";
            PersistenceManager pm = null;
            try {
                pm = PMF.get().getPersistenceManager();
                // Note that this returns a list; there could be multiple results, since the
                // datastore does not ensure uniqueness for non-primary-key fields.
                List<Contact_Info_Entry> results = (List<Contact_Info_Entry>) pm.newQuery(query).execute();
                Write_Serialized_XML(response.getOutputStream(), results.get(0));
            } catch (Exception e) {
                Send_Email(Email_From, Email_To, "Check_License_Servlet Error [ " + Time_Stamp + " ]",
                           new Text(e.toString() + "\n" + Get_Stack_Trace(e)), null);
            } finally {
                pm.close();
            }
        }

        /** Writes the object and CLOSES the stream. Uses the persistence delegate registered in this class.
         *  @param os The stream to write to.
         *  @param o  The object to be serialized. */
        public static void writeXMLObject(OutputStream os, Object o) {
            // The classloader reference must be set, since NetBeans uses another class
            // loader to load the bean, which will fail in some circumstances.
            ClassLoader oldClassLoader = Thread.currentThread().getContextClassLoader();
            Thread.currentThread().setContextClassLoader(Check_License_Servlet.class.getClassLoader());
            XMLEncoder encoder = new XMLEncoder(os);
            encoder.setExceptionListener(new ExceptionListener() {
                public void exceptionThrown(Exception e) {
                    e.printStackTrace();
                }
            });
            encoder.writeObject(o);
            encoder.flush();
            encoder.close();
            Thread.currentThread().setContextClassLoader(oldClassLoader);
        }

        private static ByteArrayOutputStream writeOutputStream = new ByteArrayOutputStream(16384);

        /** Writes an object to XML.
         *  @param out The object output to write to. [ Will not be closed. ]
         *  @param o   The object to write. */
        public static synchronized void writeAsXML(ObjectOutput out, Object o) throws IOException {
            writeOutputStream.reset();
            writeXMLObject(writeOutputStream, o);
            byte[] Bt_1 = writeOutputStream.toByteArray();
            byte[] Bt_2 = new Des_Encrypter().encrypt(Bt_1, Key);
            // Length-prefixed framing: write the encrypted payload size, then the payload.
            out.writeInt(Bt_2.length);
            out.write(Bt_2);
            out.flush();
            out.close();
        }

        public static synchronized void Write_Serialized_XML(OutputStream Output_Stream, Object o) throws IOException {
            writeAsXML(new ObjectOutputStream(Output_Stream), o);
        }

    At the receiving end the code looks like this:

        File_Url = "http://" + Site_Url + App_Dir + File_Name;
        try {
            Contact_Info_Entry Online_Contact_Entry = (Contact_Info_Entry) Read_Serialized_XML(new URL(File_Url));
        } catch (Exception e) {
            e.printStackTrace();
        }

        private static byte[] readBuf = new byte[16384];

        public static synchronized Object readAsXML(ObjectInput in) throws IOException {
            // The classloader reference must be set, since NetBeans uses another class
            // loader to load the bean, which will fail under some circumstances.
            ClassLoader oldClassLoader = Thread.currentThread().getContextClassLoader();
            Thread.currentThread().setContextClassLoader(Tool_Lib_Simple.class.getClassLoader());
            int length = in.readInt();
            readBuf = new byte[length];
            in.readFully(readBuf, 0, length);
            byte Bt[] = new Des_Encrypter().decrypt(readBuf, Key);
            XMLDecoder dec = new XMLDecoder(new ByteArrayInputStream(Bt, 0, Bt.length));
            Object o = dec.readObject();
            Thread.currentThread().setContextClassLoader(oldClassLoader);
            in.close();
            return o;
        }

        public static synchronized Object Read_Serialized_XML(URL File_Url) throws IOException {
            return readAsXML(new ObjectInputStream(File_Url.openStream()));
        }

    But I can't get the object from the Java app that's on the receiving end. Why? The error messages look like this:

        java.lang.ClassNotFoundException: PayPal_Monitor.Contact_Info_Entry
        Continuing ...
        java.lang.NullPointerException: target should not be null
        Continuing ...
        java.lang.NullPointerException: target should not be null
        Continuing ...
        java.lang.NullPointerException: target should not be null
        Continuing ...

    Read the article

  • Grails, app-engine, jpa - beginner having trouble with grails generate-all

    - by John
    I'm trying to learn about Grails with Google App Engine and JPA by following a few tutorials:

    - http://www.morkeleb.com/2009/08/12/grails-and-google-appengine-beginners-guide/
    - http://inhouse32.appspot.com/index.html
    - http://grails.org/plugin/app-engine

    I've got Grails 1.3.0 RC 2 and App Engine SDK 1.3.3, and I'm using Windows 7. The steps that I try are:

    1. grails create-app appname
    2. cd appname
    3. grails install-plugin app-engine - I answer jpa when asked about jdo/jpa. It appears to install the gorm-jpa plugin automatically, although the tutorials all suggest installing gorm-jpa manually.
    4. grails install-plugin gorm-jpa (just in case)
    5. grails create-domain-class test.Person
    6. Edit grails-app/domain/test/Person.groovy to add name and address fields:

        package test

        import javax.persistence.*;
        // import com.google.appengine.api.datastore.Key;

        @Entity
        class Person implements Serializable {
            @Id
            @GeneratedValue(strategy = GenerationType.IDENTITY)
            Long id

            @Basic
            String name

            @Basic
            String address

            static constraints = {
                id visible: false
            }
        }

    7. grails generate-all test.Person

    I get errors during this final step:

        C:\Users\John\Workspaces\STS\appname>grails generate-all test.Person
        Welcome to Grails 1.3.0.RC2 - http://grails.org/
        Licensed under Apache Standard License 2.0
        Grails home is set to: C:\Users\John\Downloads\grails-1.3.0.RC2\grails-1.3.0.RC2
        Base Directory: C:\Users\John\Workspaces\STS\appname
        Resolving dependencies...
        Dependencies resolved in 493ms.
        Running script C:\Users\John\Downloads\grails-1.3.0.RC2\grails-1.3.0.RC2\scripts\GenerateAll.groovy
        Environment set to development
        [copy] Copied 4 empty directories to 2 empty directories under C:\Users\John\.grails\1.3.0.RC2\projects\appname\resources
        [copy] Copied 4 empty directories to 2 empty directories under C:\Users\John\.grails\1.3.0.RC2\projects\appname\resources
        [copy] Copied 1 empty directory to 1 empty directory under C:\Users\John\.grails\1.3.0.RC2\projects\appname\resources
        [mkdir] Created dir: C:\Users\John\Workspaces\STS\appname\web-app\WEB-INF\classes
        [groovyc] Compiling 12 source files to C:\Users\John\Workspaces\STS\appname\web-app\WEB-INF\classes
        Note: C:\Users\John\.grails\1.3.0.RC2\projects\appname\plugins\gorm-jpa-0.7.1\src\java\org\grails\jpa\domain\JpaGrailsDomainClass.java uses or overrides a deprecated API.
        Note: Recompile with -Xlint:deprecation for details.
        Note: Some input files use unchecked or unsafe operations.
        Note: Recompile with -Xlint:unchecked for details.
        [groovyc] Compiling 8 source files to C:\Users\John\Workspaces\STS\appname\web-app\WEB-INF\classes
        [mkdir] Created dir: C:\Users\John\.grails\1.3.0.RC2\projects\appname\resources\grails-app\i18n
        [native2ascii] Converting 13 files from C:\Users\John\Workspaces\STS\appname\grails-app\i18n to C:\Users\John\.grails\1.3.0.RC2\projects\appname\resources\grails-app\i18n
        [mkdir] Created dir: C:\Users\John\.grails\1.3.0.RC2\projects\appname\resources\plugins\gorm-jpa-0.7.1\grails-app\i18n
        [mkdir] Created dir: C:\Users\John\.grails\1.3.0.RC2\projects\appname\resources\plugins\app-engine-0.8.10\grails-app\i18n
        [native2ascii] Converting 1 file from C:\Users\John\.grails\1.3.0.RC2\projects\appname\plugins\gorm-jpa-0.7.1\grails-app\i18n to C:\Users\John\.grails\1.3.0.RC2\projects\appname\resources\plugins\gorm-jpa-0.7.1\grails-app\i18n
        [native2ascii] Converting 1 file from C:\Users\John\.grails\1.3.0.RC2\projects\appname\plugins\app-engine-0.8.10\grails-app\i18n to C:\Users\John\.grails\1.3.0.RC2\projects\appname\resources\plugins\app-engine-0.8.10\grails-app\i18n
        [copy] Copying 1 file to C:\Users\John\Workspaces\STS\appname\web-app\WEB-INF\classes
        [copy] Copying 2 files to C:\Users\John\.grails\1.3.0.RC2\projects\appname\resources
        [copy] Copied 2 empty directories to 2 empty directories under C:\Users\John\.grails\1.3.0.RC2\projects\appname\resources
        [copy] Copying 1 file to C:\Users\John\.grails\1.3.0.RC2\projects\appname
        [mkdir] Created dir: C:\Users\John\Workspaces\STS\appname\web-app\plugins\app-engine-0.8.10
        [copy] Copying 1 file to C:\Users\John\Workspaces\STS\appname\web-app\plugins\app-engine-0.8.10
        [copy] Copying 1 file to C:\Users\John\Workspaces\STS\appname\web-app\WEB-INF
        [mkdir] Created dir: C:\Users\John\Workspaces\STS\appname\web-app\WEB-INF\lib
        [copy] Copying 64 files to C:\Users\John\Workspaces\STS\appname\web-app\WEB-INF\lib
        Configuring persistence for AppEngine
        [mkdir] Created dir: C:\Users\John\Workspaces\STS\appname\web-app\WEB-INF\classes\META-INF
        [copy] Copying 1 file to C:\Users\John\Workspaces\STS\appname\web-app\WEB-INF\classes\META-INF
        [mkdir] Created dir: C:\Users\John\Workspaces\STS\appname\web-app\WEB-INF\plugins\app-engine-0.8.10
        [copy] Copying 2 files to C:\Users\John\Workspaces\STS\appname\web-app\WEB-INF\plugins\app-engine-0.8.10
        [mkdir] Created dir: C:\Users\John\Workspaces\STS\appname\web-app\WEB-INF\plugins\gorm-jpa-0.7.1
        [copy] Copying 2 files to C:\Users\John\Workspaces\STS\appname\web-app\WEB-INF\plugins\gorm-jpa-0.7.1
        Packaging AppEngine jar files
        Enhancing JDO classes
        [enhance] DataNucleus Enhancer (version 1.1.4) : Enhancement of classes
        [enhance] DataNucleus Enhancer completed with success for 1 classes. Timings : input=589 ms, enhance=200 ms, total=789 ms. Consult the log for full details
        [groovyc] Compiling 1 source file to C:\Users\John\Workspaces\STS\appname\web-app\WEB-INF\classes
        [copy] Copying 1 file to C:\Users\John\.grails\1.3.0.RC2\projects\appname
        [copy] Copying 1 file to C:\Users\John\Workspaces\STS\appname\web-app\WEB-INF
        Configuring persistence for AppEngine
        Packaging AppEngine jar files
        Enhancing JDO classes
        [enhance] DataNucleus Enhancer (version 1.1.4) : Enhancement of classes
        [enhance] DataNucleus Enhancer completed with success for 1 classes. Timings : input=585 ms, enhance=28 ms, total=613 ms. Consult the log for full details
        Generating views for domain class test.Person ...
        java.lang.reflect.InvocationTargetException
            at SimpleTemplateScript1.run(SimpleTemplateScript1.groovy:43)
            at _GrailsGenerate_groovy.generateForDomainClass(_GrailsGenerate_groovy:85)
            at _GrailsGenerate_groovy$_run_closure1.doCall(_GrailsGenerate_groovy:50)
            at GenerateAll$_run_closure1.doCall(GenerateAll.groovy:42)
            at gant.Gant$_dispatch_closure5.doCall(Gant.groovy:381)
            at gant.Gant$_dispatch_closure7.doCall(Gant.groovy:415)
            at gant.Gant$_dispatch_closure7.doCall(Gant.groovy)
            at gant.Gant.withBuildListeners(Gant.groovy:427)
            at gant.Gant.this$2$withBuildListeners(Gant.groovy)
            at gant.Gant$this$2$withBuildListeners.callCurrent(Unknown Source)
            at gant.Gant.dispatch(Gant.groovy:415)
            at gant.Gant.this$2$dispatch(Gant.groovy)
            at gant.Gant.invokeMethod(Gant.groovy)
            at gant.Gant.executeTargets(Gant.groovy:590)
            at gant.Gant.executeTargets(Gant.groovy:589)
        Caused by: java.lang.NoClassDefFoundError: org/hibernate/MappingException
            ... 15 more
        Caused by: java.lang.ClassNotFoundException: org.hibernate.MappingException
            at org.codehaus.groovy.tools.RootLoader.findClass(RootLoader.java:156)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
            at org.codehaus.groovy.tools.RootLoader.loadClass(RootLoader.java:128)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
            ... 15 more
        Error running generate-all: null

    What am I doing wrong?
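    Reading the bottom of that stack trace (Caused by: java.lang.NoClassDefFoundError: org/hibernate/MappingException), the view-generation step is trying to load a Hibernate class that is not on the classpath. One common cause with this plugin combination, worth checking before anything else, is that the default Hibernate plugin is still installed alongside gorm-jpa; the App Engine setups generally require removing it, along the lines of:

        grails uninstall-plugin hibernate
        grails generate-all test.Person

    This is a first diagnostic step inferred from the error text, not a confirmed fix for this exact setup.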

    Read the article

  • NHibernate: Collection was modified; enumeration operation may not execute

    - by Daoming Yang
    Hi all,

    I'm currently struggling with this "Collection was modified; enumeration operation may not execute" issue. I have searched for this error message, and it's all related to foreach statements. I do have some foreach statements, but they simply present the data; I do not remove or add anything inside them.

    NOTE: The error happens randomly (about 4-5 times a day). The application is an MVC website. About 5 users operate this application (about 150 orders a day). Could it be that another user modified the collection, triggering this error?

    I have log4net set up, and the settings can be found here. As for "Make sure that the controller has a parameterless public constructor": I do have a parameterless public constructor in AdminProductController.

    Does anyone know why this happens and how to resolve it?

    A friend (Oskar) mentioned: "Theory: Maybe the problem is that your configuration and session factory is initialized on the first request after application restart. If a second request comes in before the first request is finished, maybe it will also try to initialize and then trigger this problem somehow."

    Many thanks.
    Daoming

    Here is the error message:

        System.InvalidOperationException: Collection was modified; enumeration operation may not execute.
        System.InvalidOperationException: An error occurred when trying to create a controller of type 'WebController.Controllers.Admin.AdminProductController'. Make sure that the controller has a parameterless public constructor.
        ---> System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation.
        ---> NHibernate.MappingException: Could not configure datastore from input stream DomainModel.Entities.Mappings.OrderProductVariant.hbm.xml
        ---> System.InvalidOperationException: Collection was modified; enumeration operation may not execute.
            at System.Collections.ArrayList.ArrayListEnumeratorSimple.MoveNext()
            at System.Xml.Schema.XmlSchemaSet.AddSchemaToSet(XmlSchema schema)
            at System.Xml.Schema.XmlSchemaSet.Add(String targetNamespace, XmlSchema schema)
            at System.Xml.Schema.XmlSchemaSet.Add(XmlSchema schema)
            at NHibernate.Cfg.Configuration.LoadMappingDocument(XmlReader hbmReader, String name)
            at NHibernate.Cfg.Configuration.AddInputStream(Stream xmlInputStream, String name)
            --- End of inner exception stack trace ---
            at NHibernate.Cfg.Configuration.LogAndThrow(Exception exception)
            at NHibernate.Cfg.Configuration.AddInputStream(Stream xmlInputStream, String name)
            at NHibernate.Cfg.Configuration.AddResource(String path, Assembly assembly)
            at NHibernate.Cfg.Configuration.AddAssembly(Assembly assembly)
            at DomainModel.RepositoryBase..ctor()
            at WebController.Controllers._baseController..ctor()
            at WebController.Controllers.Admin.AdminProductController..ctor()
            at System.RuntimeType.CreateInstanceImpl(Boolean publicOnly, Boolean skipVisibilityChecks, Boolean fillCache)
            --- End of inner exception stack trace ---
            at System.RuntimeType.CreateInstanceImpl(Boolean publicOnly, Boolean skipVisibilityChecks, Boolean fillCache)
            at System.Activator.CreateInstance(Type type, Boolean nonPublic)
            at System.Web.Mvc.DefaultControllerFactory.GetControllerInstance(RequestContext requestContext, Type controllerType)
            --- End of inner exception stack trace ---
            at System.Web.Mvc.DefaultControllerFactory.GetControllerInstance(RequestContext requestContext, Type controllerType)
            at System.Web.Mvc.DefaultControllerFactory.CreateController(RequestContext requestContext, String controllerName)
            at System.Web.Mvc.MvcHandler.ProcessRequestInit(HttpContextBase httpContext, IController& controller, IControllerFactory& factory)
            at System.Web.Mvc.MvcHandler.BeginProcessRequest(HttpContextBase httpContext, AsyncCallback callback, Object state)
            at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
            at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

    UPDATE CODE:

    In my Global.asax.cs, I'm doing this:

        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            ManagedWebSessionContext.Bind(HttpContext.Current, SessionManager.SessionFactory.OpenSession());
        }

        protected void Application_EndRequest(object sender, EventArgs e)
        {
            ISession session = ManagedWebSessionContext.Unbind(HttpContext.Current, SessionManager.SessionFactory);
            if (session != null)
            {
                try
                {
                    if (session.Transaction != null && session.Transaction.IsActive)
                    {
                        session.Transaction.Rollback();
                    }
                    else
                    {
                        session.Flush();
                    }
                }
                finally
                {
                    session.Close();
                }
            }
        }

    In the SessionManager class, I'm doing:

        public class SessionManager
        {
            private readonly ISessionFactory sessionFactory;

            public static ISessionFactory SessionFactory
            {
                get { return Instance.sessionFactory; }
            }

            private ISessionFactory GetSessionFactory()
            {
                return sessionFactory;
            }

            public static SessionManager Instance
            {
                get { return NestedSessionManager.sessionManager; }
            }

            public static ISession OpenSession()
            {
                return Instance.GetSessionFactory().OpenSession();
            }

            public static ISession CurrentSession
            {
                get { return Instance.GetSessionFactory().GetCurrentSession(); }
            }

            private SessionManager()
            {
                Configuration config = new Configuration().Configure();
                config.AddAssembly(Assembly.GetExecutingAssembly());
                sessionFactory = config.BuildSessionFactory();
            }

            class NestedSessionManager
            {
                internal static readonly SessionManager sessionManager = new SessionManager();
            }
        }

    In the Repository, I'm doing this:

        public IEnumerable<User> GetAll()
        {
            ICriteria criteria = SessionManager.CurrentSession.CreateCriteria(typeof(User));
            return criteria.List<User>();
        }

    In the Controller, I'm doing this:

        public class UserController : _baseController
        {
            IUserRoleRepository _userRoleRepository;
            internal static readonly ILogger log = LogManager.GetLogger(typeof(UserController));

            public UserController()
            {
                _userRoleRepository = new UserRoleRepository();
            }

            public ActionResult UserList()
            {
                var myList = _usersRepository.GetAll();
                return View(myList);
            }
        }
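    One detail worth noting in the stack trace above: the configuration path runs through DomainModel.RepositoryBase..ctor into Configuration.AddAssembly, i.e. the NHibernate mapping documents are being re-read every time a repository is constructed, not only once inside SessionManager. Configuration (and the XmlSchemaSet underneath it, where the exception is actually thrown) is not thread-safe, so two requests constructing repositories at the same time can collide exactly as Oskar's theory suggests. A minimal sketch of the usual remedy on .NET 4 and later, building the factory exactly once behind Lazy<T> and keeping Configuration out of the repositories (NHibernateBootstrap is a hypothetical name, not from the question):

        using System;
        using System.Reflection;
        using NHibernate;
        using NHibernate.Cfg;

        public static class NHibernateBootstrap
        {
            // Lazy<T>'s default thread-safety mode (ExecutionAndPublication)
            // guarantees BuildFactory runs exactly once, even when concurrent
            // first requests race after an application restart.
            private static readonly Lazy<ISessionFactory> factory =
                new Lazy<ISessionFactory>(BuildFactory);

            public static ISessionFactory SessionFactory
            {
                get { return factory.Value; }
            }

            private static ISessionFactory BuildFactory()
            {
                // The only place Configuration is ever touched.
                Configuration config = new Configuration().Configure();
                config.AddAssembly(Assembly.GetExecutingAssembly());
                return config.BuildSessionFactory();
            }
        }

        // Repositories only ask for sessions; they never configure NHibernate.
        public abstract class RepositoryBase
        {
            protected ISession OpenSession()
            {
                return NHibernateBootstrap.SessionFactory.OpenSession();
            }
        }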

    Read the article

< Previous Page | 17 18 19 20 21 22  | Next Page >