Search Results

Search found 8397 results on 336 pages for 'implementation'.

  • UIView with IrrlichtScene - iOS

    - by user1459024
    i have a UIViewController in a Storyboard and want to draw a IrrlichtScene in this View Controller. My Code: WWSViewController.h #import <UIKit/UIKit.h> @interface WWSViewController : UIViewController { IBOutlet UILabel *errorLabel; } @end WWSViewController.mm #import "WWSViewController.h" #include "../../ressources/irrlicht/include/irrlicht.h" using namespace irr; using namespace core; using namespace scene; using namespace video; using namespace io; using namespace gui; @interface WWSViewController () @end @implementation WWSViewController -(void)awakeFromNib { errorLabel = [[UILabel alloc] init]; errorLabel.text = @""; IrrlichtDevice *device = createDevice( video::EDT_OGLES1, dimension2d<u32>(640, 480), 16, false, false, false, 0); /* Set the caption of the window to some nice text. Note that there is an 'L' in front of the string. The Irrlicht Engine uses wide character strings when displaying text. */ device->setWindowCaption(L"Hello World! - Irrlicht Engine Demo"); /* Get a pointer to the VideoDriver, the SceneManager and the graphical user interface environment, so that we do not always have to write device->getVideoDriver(), device->getSceneManager(), or device->getGUIEnvironment(). */ IVideoDriver* driver = device->getVideoDriver(); ISceneManager* smgr = device->getSceneManager(); IGUIEnvironment* guienv = device->getGUIEnvironment(); /* We add a hello world label to the window, using the GUI environment. The text is placed at the position (10,10) as top left corner and (260,22) as lower right corner. */ guienv->addStaticText(L"Hello World! This is the Irrlicht Software renderer!", rect<s32>(10,10,260,22), true); /* To show something interesting, we load a Quake 2 model and display it. We only have to get the Mesh from the Scene Manager with getMesh() and add a SceneNode to display the mesh with addAnimatedMeshSceneNode(). We check the return value of getMesh() to become aware of loading problems and other errors. Instead of writing the filename sydney.md2, it would also be possible to load a Maya object file (.obj), a complete Quake3 map (.bsp) or any other supported file format. By the way, that cool Quake 2 model called sydney was modelled by Brian Collins. */ IAnimatedMesh* mesh = smgr->getMesh("/Users/dbocksteger/Desktop/test/media/sydney.md2"); if (!mesh) { device->drop(); if (!errorLabel) { errorLabel = [[UILabel alloc] init]; } errorLabel.text = @"Konnte Mesh nicht laden."; return; } IAnimatedMeshSceneNode* node = smgr->addAnimatedMeshSceneNode( mesh ); /* To let the mesh look a little bit nicer, we change its material. We disable lighting because we do not have a dynamic light in here, and the mesh would be totally black otherwise. Then we set the frame loop, such that the predefined STAND animation is used. And last, we apply a texture to the mesh. Without it the mesh would be drawn using only a color. */ if (node) { node->setMaterialFlag(EMF_LIGHTING, false); node->setMD2Animation(scene::EMAT_STAND); node->setMaterialTexture( 0, driver->getTexture("/Users/dbocksteger/Desktop/test/media/sydney.bmp") ); } /* To look at the mesh, we place a camera into 3d space at the position (0, 30, -40). The camera looks from there to (0,5,0), which is approximately the place where our md2 model is. */ smgr->addCameraSceneNode(0, vector3df(0,30,-40), vector3df(0,5,0)); /* Ok, now we have set up the scene, lets draw everything: We run the device in a while() loop, until the device does not want to run any more. 
This would be when the user closes the window or presses ALT+F4 (or whatever keycode closes a window). */ while(device->run()) { /* Anything can be drawn between a beginScene() and an endScene() call. The beginScene() call clears the screen with a color and the depth buffer, if desired. Then we let the Scene Manager and the GUI Environment draw their content. With the endScene() call everything is presented on the screen. */ driver->beginScene(true, true, SColor(255,100,101,140)); smgr->drawAll(); guienv->drawAll(); driver->endScene(); } /* After we are done with the render loop, we have to delete the Irrlicht Device created before with createDevice(). In the Irrlicht Engine, you have to delete all objects you created with a method or function which starts with 'create'. The object is simply deleted by calling ->drop(). See the documentation at irr::IReferenceCounted::drop() for more information. */ device->drop(); } - (void)viewDidLoad { [super viewDidLoad]; // Do any additional setup after loading the view, typically from a nib. } - (void)viewDidUnload { [super viewDidUnload]; // Release any retained subviews of the main view. } - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { return (interfaceOrientation != UIInterfaceOrientationPortraitUpsideDown); } @end Sadly, the result is just a black view in the Simulator. :( I hope someone here can explain how to draw the scene in a UIView. Furthermore, I'm getting this error: "Could not load sprite bank because the file does not exist: #DefaultFont". How can I fix it?

  • Proxied calls not working as expected

    - by AndyH
    I have been modifying an application to have a cleaner client/server split to allow for load splitting, resource sharing, etc. Everything is written to an interface, so it was easy to add a remoting layer to the interface using a proxy. Everything worked fine. The next phase was to add a caching layer to the interface; again this worked fine and speed was improved, but not as much as I would have expected. On inspection it became very clear what was going on. I feel sure that this behavior has been seen many times before and there is probably a design pattern to solve the problem, but it eludes me and I'm not even sure how to describe it. It is easiest explained with an example. Let's imagine the interface is:

        interface IMyCode {
            List<IThing> getLots( List<String> ids );
            IThing getOne( String id );
        }

    The getLots() method calls getOne() and fills up the list before returning. The interface is implemented at the client, which is proxied to a remoting client, which then calls the remoting server, which in turn calls the implementation at the server. At the client and the server layers there is also a cache. So we have:

        Client interface | Client cache | Remote client | Remote server | Server cache | Server interface

    If we call getOne("A") at the client interface, the call is passed to the client cache, which faults. This then calls the remote client, which passes the call to the remote server. This then calls the server cache, which also faults, and so the call is eventually passed to the server interface, which actually gets the IThing. In turn the server cache is filled and finally the client cache also. If getOne("A") is called again at the client interface, the client cache has the data and it gets returned immediately. If a second client called getOne("B"), it would fill the server cache with "B" as well as its own client cache. Then, when the first client calls getOne("B"), the client cache faults but the server cache has the data. This is all as one would expect and works well.

    Now let's call getLots( [ "C", "D" ] ). This works as you would expect by calling getOne() twice, but there is a subtlety here. The call to getLots() cannot directly make use of the cache. Therefore the sequence is to call the client interface, which in turn calls the remote client, then the remote server and eventually the server interface. This then calls getOne() to fill the list before returning. The problem is that the getOne() calls are being satisfied at the server when ideally they should be satisfied at the client. If you imagine that the client/server link is really slow, it becomes clear why the client call is more efficient than the server call once the client cache has the data. This example is contrived to illustrate the point. The more general problem is that you cannot just keep adding proxied layers to an interface and expect it to work as you would imagine: as soon as the call goes 'through' the proxy, any subsequent calls are on the proxied side rather than the 'self' side. Have I failed to learn something, or not learned it correctly? All this is implemented in Java and I haven't used EJBs.

    It seems that the example may be confusing. The problem is nothing to do with cache efficiency. It is more to do with an illusion created by the use of proxies, or AOP techniques in general. When you have an object whose class implements an interface, there is an assumption that a call on that object might make further calls on that same object. For example:

        public String getInternalString() {
            return InetAddress.getLocalHost().toString();
        }
        public String getString() {
            return getInternalString();
        }

    If you get an object and call getString(), the result depends on where the code is running. If you add a remoting proxy to the class, then the result could be different for calls to getString() and getInternalString() on the same object. This is because the initial call gets 'deproxied' before the actual method is called. I find this not only confusing, but I wonder how I can control this behavior, especially as the proxy may be applied by a third party. The concept is fine but the practice is certainly not what I expected. Have I missed the point somewhere?
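    To see the 'deproxied' self-call in isolation, here is a minimal, self-contained Java sketch using a plain JDK dynamic proxy as the caching layer (class and method names are made up, and IThing is simplified to String; this is not the poster's remoting stack, just the same effect in miniature). The handler only sees calls made through the proxy reference: getLots() is forwarded to the target once, and the target's internal this.getOne() calls never pass back through the cache.

        import java.lang.reflect.InvocationHandler;
        import java.lang.reflect.Method;
        import java.lang.reflect.Proxy;
        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        interface IMyCode {
            String getOne(String id);
            List<String> getLots(List<String> ids);
        }

        class ServerImpl implements IMyCode {
            public String getOne(String id) { return "thing-" + id; } // stands in for the expensive lookup
            public List<String> getLots(List<String> ids) {
                List<String> result = new ArrayList<String>();
                for (String id : ids) {
                    result.add(getOne(id)); // 'this.getOne' - bypasses any proxy wrapped around this object
                }
                return result;
            }
        }

        class CachingHandler implements InvocationHandler {
            private final IMyCode target;
            private final Map<String, String> cache = new HashMap<String, String>();
            CachingHandler(IMyCode target) { this.target = target; }
            public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
                if ("getOne".equals(m.getName())) {
                    String id = (String) args[0];
                    System.out.println("cache consulted for " + id);
                    if (!cache.containsKey(id)) {
                        cache.put(id, target.getOne(id));
                    }
                    return cache.get(id);
                }
                return m.invoke(target, args); // getLots passes straight through; its inner getOne calls stay on the target side
            }
        }

        public class ProxyDemo {
            public static void main(String[] args) {
                IMyCode target = new ServerImpl();
                IMyCode proxied = (IMyCode) Proxy.newProxyInstance(
                        IMyCode.class.getClassLoader(),
                        new Class<?>[] { IMyCode.class },
                        new CachingHandler(target));
                proxied.getOne("A");                          // prints "cache consulted for A"
                proxied.getLots(Arrays.asList("A", "B"));     // prints nothing - the inner calls happened behind the proxy
            }
        }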

  • Developing Schema Compare for Oracle (Part 4): Script Configuration

    - by Simon Cooper
    If you've had a chance to play around with the Schema Compare for Oracle beta, you may have come across this screen in the synchronization wizard: This screen is one of the few screens that, along with the project configuration form, doesn't come from SQL Compare. This screen was designed to solve a couple of issues that, although aren't specific to Oracle, are much more of a problem than on SQL Server: Datatype conversions and NOT NULL columns. 1. Datatype conversions SQL Server is generally quite forgiving when it comes to datatype conversions using ALTER TABLE. For example, you can convert from a VARCHAR to INT using ALTER TABLE as long as all the character values are parsable as integers. Oracle, on the other hand, only allows ALTER TABLE conversions that don't change the internal data format. Essentially, every change that requires an actual datatype conversion has to be done using a rebuild with a conversion function. That's OK, as we can simply hard-code the various conversion functions for the valid datatype conversions and insert those into the rebuild SELECT list. However, as there always is with Oracle, there's a catch. Have a look at the NUMTODSINTERVAL function. As well as specifying the value (or column) to convert, you have to specify an interval_unit, which tells oracle how to interpret the input number. We can't hardcode a default for this parameter, as it is entirely dependent on the user's data context! So, in order to convert NUMBER to INTERVAL DAY TO SECOND/INTERVAL YEAR TO MONTH, we need to have feedback from the user as to what to put in this parameter while we're generating the sync script - this requires a new step in the engine action/script generation to insert these values into the script, as well as new UI to allow the user to specify these values in a sensible fashion. In implementing the engine and UI infrastructure to allow this it made much more sense to implement it for any rebuild datatype conversion, not just NUMBER to INTERVALs. For conversions which we can do, we pre-fill the 'value' box with the appropriate function from the documentation. The user can also type in arbitary SQL expressions, which allows the user to specify optional format parameters for the relevant conversion functions, or indeed call their own functions to convert between values that don't have a built-in conversion defined. As the value gets inserted as-is into the rebuild SELECT list, any expression that is valid in that context can be specified as the conversion value. 2. NOT NULL columns Another problem that is solved by the new step in the sync wizard is adding a NOT NULL column to a table. If the table contains data (as most database tables do), you can't just add a NOT NULL column, as Oracle doesn't know what value to put in the new column for existing rows - the DDL statement will fail. There are actually 3 separate scenarios for this problem that have separate solutions within the engine: Adding a NOT NULL column to a table without a rebuild Here, the workaround is to add a column default with an appropriate value to the column you're adding: ALTER TABLE tbl1 ADD newcol NUMBER DEFAULT <value> NOT NULL; Note, however, there is something to bear in mind about this solution; once specified on a column, a default cannot be removed. To 'remove' a default from a column you change it to have a default of NULL, hence there's code in the engine to treat a NULL default the same as no default at all. 
Adding a NOT NULL column to a table, where a separate change forced a table rebuild Fortunately, in this case, a column default is not required - we can simply insert the default value into the rebuild SELECT clause. Changing an existing NULL to a NOT NULL column To implement this, we run an UPDATE command before the ALTER TABLE to change all the NULLs in the column to the required default value. For all three, we need some way of allowing the user to specify a default value to use instead of NULL; as this is essentially the same problem as datatype conversion (inserting values into the sync script), we can re-use the UI and engine implementation of datatype conversion values. We also provide the option to alter the new column to allow NULLs, or to ignore the problem completely. Note that there is the same (long-running) problem in SQL Compare, but it is much more of an issue in Oracle as you cannot easily roll back executed DDL statements if the script fails at some point during execution. Furthermore, the engine of SQL Compare is far less conducive to inserting user-supplied values into the generated script. As we're writing the Schema Compare engine from scratch, we used what we learnt from the SQL Compare engine and designed it to be far more modular, which makes inserting procedures like this much easier.

  • Cloud Computing Forces Better Design Practices

    - by Herve Roggero
    Is cloud computing simply different from on-premise development, or is cloud computing actually forcing you to create better applications than you normally would? In other words, is cloud computing merely imposing different design principles, or forcing better design principles? A little while back I got into a discussion with a developer in which I was arguing that cloud computing, and specifically Windows Azure in his case, was forcing developers to adopt better design principles. His opinion was that cloud computing was not yielding better systems, just different systems. In this blog, I will argue that cloud computing does force developers to use better design practices, and hence to build better applications.

    So the first thing to define, of course, is the word "better" in the context of application development. Looking at a few definitions online, better means "superior quality". As it relates to this discussion, then, I stipulate that cloud computing can yield higher quality applications in terms of scalability, everything else being equal.

    Before going further I also need to outline the difference between performance and scalability. Performance and scalability are two related concepts, but they don't mean the same thing. Scalability is the measure of system performance given various loads. So when developers design for performance, they usually give higher priority to a given load and tend to optimize for that load. When developers design for scalability, the actual performance at a given load is not as important; the ability to ensure reasonable performance regardless of the load becomes the objective. This can lead to very different design choices. For example, if your objective is to obtain the fastest response time possible for a service you are building, you may choose to implement a TCP connection that never closes until the client chooses to close the connection (in other words, a tightly coupled service from a connectivity standpoint), and on which a connection session is established for faster processing of the next request (like SQL Server or other database systems, for example). If your objective is to scale, you may implement a service that answers requests without keeping session state, so that server resources are released as quickly as possible – a REST service, for example. This alternate design would likely have a slower response time than the TCP service for any given load, but would continue to function at very large loads because of its inherently loosely coupled design. An example of a REST service is the NoSQL implementation in the Microsoft cloud called Azure Tables.

    Now, back to cloud computing… Cloud computing is designed to help you scale your applications, specifically when you use Platform as a Service (PaaS) offerings. However it's not automatic. You can design a tightly-coupled TCP service as discussed above, and as you can imagine, it probably won't scale even if you place the service in the cloud because it isn't using a connection pattern that will allow it to scale [note: I am not implying that all TCP systems do not scale; I am just illustrating the scalability concepts with an imaginary TCP service that isn't designed to scale for the purpose of this discussion].
The other service, using REST, will have a better chance to scale because, by design, it minimizes resource consumption for individual requests and doesn’t tie a client connection to a specific endpoint (which means you can easily deploy this service to hundreds of machines without much trouble, as long as your pockets are deep enough). The TCP and REST services discussed above are both valid designs; the TCP service is faster and the REST service scales better. So is it fair to say that one service is fundamentally better than the other? No; not unless you need to scale. And if you don’t need to scale, then you don’t need the cloud in the first place. However, it is interesting to note that if you do need to scale, then a loosely coupled system becomes a better design because it can almost always scale better than a tightly-coupled system. And because most applications grow overtime, with an increasing user base, new functional requirements, increased data and so forth, most applications eventually do need to scale. So in my humble opinion, I conclude that a loosely coupled system is not just different than a tightly coupled system; it is a better design, because it will stand the test of time. And in my book, if a system stands the test of time better than another, it is of superior quality. Because cloud computing demands loosely coupled systems so that its underlying service architecture can be leveraged, developers ultimately have no choice but to design loosely coupled systems for the cloud. And because loosely coupled systems are better… … the cloud forces better design practices. My 2 cents.
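    For what it's worth, here is a minimal sketch of what such a stateless, loosely coupled service can look like; the discussion above uses Azure examples, so treat this Java/JAX-RS resource purely as an illustration, with a hypothetical path and a stubbed-out lookup. The point is that no session or connection state survives the request, so any of N identical instances can serve the next call.

        import javax.ws.rs.GET;
        import javax.ws.rs.Path;
        import javax.ws.rs.PathParam;
        import javax.ws.rs.Produces;

        // Each request is handled independently; nothing pins a client to this server instance.
        @Path("/things")
        public class ThingResource {

            @GET
            @Path("{id}")
            @Produces("application/json")
            public String getThing(@PathParam("id") String id) {
                return lookup(id); // fetch from shared storage; resources are released when the request ends
            }

            private String lookup(String id) {
                return "{\"id\":\"" + id + "\"}"; // stand-in for the real data access
            }
        }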

  • LINQ and ArcObjects

    - by Marko Apfel
    Motivation

    LINQ (Language Integrated Query) has been a component of the Microsoft .NET Framework since version 3.5. It allows SQL-like queries against various data sources such as SQL databases, XML, etc. Like SQL, LINQ provides a declarative notation for problem solving – i.e. you don't need to describe in detail how a task is to be solved, you describe what is to be solved. This frees the developer from error-prone iterator constructs. Ideally, of course, features could be accessed the same way. Then this construct would be conceivable:

        var largeFeatures =
            from feature in features
            where (feature.GetValue("SHAPE_Area").ToDouble() > 3000)
            select feature;

    or its equivalent as a lambda expression:

        var largeFeatures = features.Where(feature => (feature.GetValue("SHAPE_Area").ToDouble() > 3000));

    This requires an appropriate provider which manages the corresponding iterator logic. That is easier than you might think at first sight – you only have to deliver the desired entities as IEnumerable<IFeature>. LINQ automatically establishes a state machine in the background whose execution is delayed (deferred execution) – only when you really request entities (foreach, Count(), ToList(), ...) does processing take place, although the query was already created at a completely different place. Especially when iterating multiple times through the entities, in your first debugging sessions you will rub your eyes when the execution pointer magically jumps back into the iterator logic.

    Realization

    A very concise way of constructing IEnumerable<IFeature> is to run through an IFeatureCursor and return each feature via yield. For easier usage I have put the logic into an extension method GetFeatures() for IFeatureClass:

        public static IEnumerable<IFeature> GetFeatures(this IFeatureClass featureClass, IQueryFilter queryFilter, RecyclingPolicy policy)
        {
            IFeatureCursor featureCursor = featureClass.Search(queryFilter, RecyclingPolicy.Recycle == policy);
            IFeature feature;
            while (null != (feature = featureCursor.NextFeature()))
            {
                yield return feature;
            }
            // this is skipped in unit tests with a cursor mock
            if (Marshal.IsComObject(featureCursor))
            {
                Marshal.ReleaseComObject(featureCursor);
            }
        }

    So you can now easily generate the IEnumerable<IFeature>:

        IEnumerable<IFeature> features = _featureClass.GetFeatures(RecyclingPolicy.DoNotRecycle);

    You have to be careful with recycling cursors. After a deferred execution in the same context it is not a good idea to re-iterate over the features: in that case only the content of the last (recycled) feature is provided, and all the features are the same in the second pass. Therefore, this expression would be critical:

        largeFeatures.ToList().ForEach(feature => Debug.WriteLine(feature.OID));

    because ToList() iterates once through the list, so the cursor has already been moved through the features. The subsequent ForEach() therefore always delivers the same feature. In such situations you must not use a recycling cursor. Repeated executions of ForEach() are not a problem, because each time the state machine is re-instantiated and the cursor runs again – that's the magic already mentioned above.

    Perspective

    You can also go one step further and write your own implementation of the interface IEnumerable<IFeature>. Only the method and property for accessing the enumerator have to be programmed, and in the enumerator itself the Reset() method organizes the re-execution of the search.
    This could be achieved with an appropriate delegate in the constructor:

        new FeatureEnumerator<IFeatureClass>(_featureClass, featureClass => featureClass.Search(_filter, isRecyclingCursor));

    which is called in Reset():

        public void Reset()
        {
            _featureCursor = _resetCursor(_t);
        }

    In this manner, enumerators for completely different scenarios can be implemented and used on the client side in exactly the same way as described above. Cursors, selection sets, etc. thus merge into a single concept, and the reusability of code increases immensely. On top of that, in automated unit tests an IEnumerable can be mocked very easily – a major step towards better software quality.

    Conclusion

    Nevertheless, caution should be exercised with these constructs in performance-relevant queries. Because a state machine is managed in the background, a lot of overhead is created; the processing costs additional time – about 20 to 100 percent. In addition, working without a recycling cursor can quickly become a performance gap. However, declarative LINQ code is much more elegant, less error-prone and easier to maintain than manually iterating, comparing and building up a list of results. In my experience the code size is reduced by an average of 75 to 90 percent! So I am happy to wait a few milliseconds longer. As so often, maintainability has to be balanced against performance – and for me maintainability is gaining in priority. In times of multi-core processors, the processing time of most business processes is in any case not dominated by code execution but by waiting for user input.

    Demo source code

    The source code for this prototype, with several unit tests, can be downloaded here: https://github.com/esride-apf/Linq2ArcObjects.

  • UK OUG Conference Highlights and Insights

    - by Richard Bingham
    As per my preemptive post, this was the first time the annual conference organized by the UK Oracle User Group (UKOUG) was split into two events, one for Oracle Applications and another in December for Oracle Technology. Apps13, as it was branded, was hailed as a success, with over 1000 registered attendees and three days of sessions, exhibition, round-tables and many other types of content. As this poster on their stand illustrates, the UKOUG is a strong community with popular participants from both big and small Oracle partners and customers. The venue was a more intimate setting than previous years also, allowing everyone to casually bump into those they hoped to. It gave a real feeling of an Apps Community. The main themes over the days where CRM and Customer Experience, HCM, and FIN/SCM. This allowed people to attend just one focused day if they wanted. In addition the Apps Transformation stream ran across all three days, offering insights, advice, and details on the newer product solutions like Fusion Applications.  Here are some of the key take-aways I got from the conference, specific to my role in Fusion Applications Developer Relations: User Experience continues to be a significant reason for adopting some of the newer application products available, with immediately obvious gains in user productivity and satisfaction reported by customers. Also this doesn't stop with the baked-in UX either, with their Design Patterns proving popular and indeed currently being extended to including things like extending on ADF mobile and customizing the Simplified UI. More on this to come from us soon. The executive sessions emphasized the "it's a journey" phrase, illustrating that modern business applications are powered by technologies such as Cloud, Mobile, Social and Big Data and these can be harnessed to help propel your organization forward. Indeed the emphasis is away from the traditional vendor prescribed linear applications road map, and towards plotting a course based on business priorities supported by a broad range of integrated solutions. To help with this several conference sessions demoed the new "Applications Navigator" tool, developed in partnership with OUG members, which offers a visual framework to help organizations plan their Oracle Applications investments around business and technology imperatives. Initial reaction was positive, especially as customers do not need to decipher Oracle's huge product catalog and embeds the best blend of proven and integrated applications solutions. We'll share more on this when it is generally available. Several sessions focused around explanations and interpretation of Oracle OpenWorld 2013, helping highlight the key Oracle Applications messages and directions. With a relative small percentage of conference attendees also at OpenWorld (from a show of hands) this was a popular way to distill the information available down into specific items of interest for the community. Please note the original OpenWorld 2013 content is still available for download but will not remain available forever (via the Oracle website OpenWorld Content Catalog > pick a session > see the PDF download). With the release of E-Business Suite 12.2 the move to develop and deploy on the Fusion Middleware stack becomes a reality for many Oracle Applications customers. 
This coupled with recent E-Business Suite features such as the Integrated SOA Gateway and the E-Business Suite SDK for Java, illustrates how the gap between the technologies and techniques involved in extending E-Business Suite and Fusion Applications is quickly narrowing. We'll see this merging continue to evolve going forwards. Getting started with Oracle Cloud Applications is actually easier than many customers expected, with a broad selection of both large and medium sized organizations explaining how they added new features to their existing Oracle Applications portfolios. New functionality available from Fusion HCM and CX are popular extensions that do not have to disrupt those core business services. Coexistence is the buzzword here, and the available integration is also simpler than many expected, commonly involving an initial setup data load, then regularly incremental synchronizations, often without a need for real-time constant communication between systems. With much of this pre-built already the implementation process is also quite rapid. With most people dressed in suits, we wanted to get the conversations going without the traditional english reserve, so we decided to make ourselves a bit more obvious, as the photo below shows. This seemed to be quite successful and helped those interested identify and approach us. Keep a look out for similar again. In fact if you're in the UK there is an "Apps Transformation Day" planned by the UKOUG for the 19th March 2014, with more details to follow. Again something we'll be sure to participate in. I am hoping to attend the next half of the UKOUG annual conference, Tech13, that focuses more on Oracle technology and where there is more likely to be larger attendance of those interested in the lower-level aspects of applications customization and development. If you're going, let me know and maybe we can meet up.

  • Concurrency Utilities for Java EE Early Draft (JSR 236)

    - by arungupta
    Concurrency Utilities for Java EE is being worked as JSR 236 and has released an Early Draft. It provides concurrency capabilities to Java EE application components without compromising container integrity. Simple (common) and advanced concurrency patterns are easily supported without sacrificing usability. Using Java SE concurrency utilities such as java.util.concurrent API, java.lang.Thread and java.util.Timer in a Java EE application component such as EJB or Servlet are problematic since the container and server have no knowledge of these resources. JSR 236 enables concurrency largely by extending the Concurrency Utilities API developed under JSR-166. This also allows a consistency between Java SE and Java EE concurrency programming model. There are four main programming interfaces available: ManagedExecutorService ManagedScheduledExecutorService ContextService ManagedThreadFactory ManagedExecutorService is a managed version of java.util.concurrent.ExecutorService. The implementations of this interface are provided by the container and accessible using JNDI reference: <resource-env-ref>  <resource-env-ref-name>    concurrent/BatchExecutor  </resource-env-ref-name>  <resource-env-ref-type>    javax.enterprise.concurrent.ManagedExecutorService  </resource-env-ref-type><resource-env-ref> and available as: @Resource(name="concurrent/BatchExecutor")ManagedExecutorService executor; Its recommended to bind the JNDI references in the java:comp/env/concurrent subcontext. The asynchronous tasks that need to be executed need to implement java.lang.Runnable or java.util.concurrent.Callable interface as: public class MyTask implements Runnable { public void run() { // business logic goes here }} OR public class MyTask2 implements Callable<Date> {  public Date call() { // business logic goes here   }} The task is then submitted to the executor using one of the submit method that return a Future instance. The Future represents the result of the task and can also be used to check if the task is complete or wait for its completion. Future<String> future = executor.submit(new MyTask(), String.class);. . .String result = future.get(); Another example to submit tasks is: class MyTask implements Callback<Long> { . . . }class MyTask2 implements Callback<Date> { . . . }ArrayList<Callable> tasks = new ArrayList<();tasks.add(new MyTask());tasks.add(new MyTask2());List<Future<Object>> result = executor.invokeAll(tasks); The ManagedExecutorService may be configured for different properties such as: Hung Task Threshold: Time in milliseconds that a task can execute before it is considered hung Pool Info Core Size: Number of threads to keep alive Maximum Size: Maximum number of threads allowed in the pool Keep Alive: Time to allow threads to remain idle when # of threads > Core Size Work Queue Capacity: # of tasks that can be stored in inbound buffer Thread Use: Application intend to run short vs long-running tasks, accordingly pooled or daemon threads are picked ManagedScheduledExecutorService adds delay and periodic task running capabilities to ManagedExecutorService. 
The implementations of this interface are provided by the container and accessible using JNDI reference: <resource-env-ref>  <resource-env-ref-name>    concurrent/BatchExecutor  </resource-env-ref-name>  <resource-env-ref-type>    javax.enterprise.concurrent.ManagedExecutorService  </resource-env-ref-type><resource-env-ref> and available as: @Resource(name="concurrent/timedExecutor")ManagedExecutorService executor; And then the tasks are submitted using submit, invokeXXX or scheduleXXX methods. ScheduledFuture<?> future = executor.schedule(new MyTask(), 5, TimeUnit.SECONDS); This will create and execute a one-shot action that becomes enabled after 5 seconds of delay. More control is possible using one of the newly added methods: MyTaskListener implements ManagedTaskListener {  public void taskStarting(...) { . . . }  public void taskSubmitted(...) { . . . }  public void taskDone(...) { . . . }  public void taskAborted(...) { . . . } }ScheduledFuture<?> future = executor.schedule(new MyTask(), 5, TimeUnit.SECONDS, new MyTaskListener()); Here, ManagedTaskListener is used to monitor the state of a task's future. ManagedThreadFactory provides a method for creating threads for execution in a managed environment. A simple usage is: @Resource(name="concurrent/myThreadFactory")ManagedThreadFactory factory;. . .Thread thread = factory.newThread(new Runnable() { . . . }); concurrent/myThreadFactory is a JNDI resource. There is lot of interesting content in the Early Draft, download it, and read yourself. The implementation will be made available soon and also be integrated in GlassFish 4 as well. Some references for further exploring ... Javadoc Early Draft Specification concurrency-ee-spec.java.net [email protected]
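    Of the four interfaces listed above, ContextService is the only one not shown in an example. The sketch below is a rough illustration (the JNDI name and the surrounding component are made up) of the createContextualProxy method described in the draft: the task captures the component's context when the proxy is created, and that context is re-applied whenever run() is invoked, even on a thread the container did not create.

        import javax.annotation.Resource;
        import javax.enterprise.concurrent.ContextService;

        public class ReportTrigger {

            // JNDI name is illustrative; bind it under java:comp/env/concurrent as recommended above
            @Resource(name = "concurrent/myContextService")
            ContextService contextService;

            public void fireAndForget() {
                Runnable task = new Runnable() {
                    public void run() {
                        // business logic that relies on the component's naming/class-loader/security context
                    }
                };
                Runnable contextualTask = contextService.createContextualProxy(task, Runnable.class);
                new Thread(contextualTask).start(); // the captured context travels with the proxy
            }
        }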

  • Increasing efficiency of N-Body gravity simulation

    - by Postman
    I'm making a space exploration type game, it will have many planets and other objects that will all have realistic gravity. I currently have a system in place that works, but if the number of planets goes above 70, the FPS decreases an practically exponential rates. I'm making it in C# and XNA. My guess is that I should be able to do gravity calculations between 100 objects without this kind of strain, so clearly my method is not as efficient as it should be. I have two files, Gravity.cs and EntityEngine.cs. Gravity manages JUST the gravity calculations, EntityEngine creates an instance of Gravity and runs it, along with other entity related methods. EntityEngine.cs public void Update() { foreach (KeyValuePair<string, Entity> e in Entities) { e.Value.Update(); } gravity.Update(); } (Only relevant piece of code from EntityEngine, self explanatory. When an instance of Gravity is made in entityEngine, it passes itself (this) into it, so that gravity can have access to entityEngine.Entities (a dictionary of all planet objects)) Gravity.cs namespace ExplorationEngine { public class Gravity { private EntityEngine entityEngine; private Vector2 Force; private Vector2 VecForce; private float distance; private float mult; public Gravity(EntityEngine e) { entityEngine = e; } public void Update() { //First loop foreach (KeyValuePair<string, Entity> e in entityEngine.Entities) { //Reset the force vector Force = new Vector2(); //Second loop foreach (KeyValuePair<string, Entity> e2 in entityEngine.Entities) { //Make sure the second value is not the current value from the first loop if (e2.Value != e.Value ) { //Find the distance between the two objects. Because Fg = G * ((M1 * M2) / r^2), using Vector2.Distance() and then squaring it //is pointless and inefficient because distance uses a sqrt, squaring the result simple cancels that sqrt. distance = Vector2.DistanceSquared(e2.Value.Position, e.Value.Position); //This makes sure that two planets do not attract eachother if they are touching, completely unnecessary when I add collision, //For now it just makes it so that the planets are not glitchy, performance is not significantly improved by removing this IF if (Math.Sqrt(distance) > (e.Value.Texture.Width / 2 + e2.Value.Texture.Width / 2)) { //Calculate the magnitude of Fg (I'm using my own gravitational constant (G) for the sake of time (I know it's 1 at the moment, but I've been changing it) mult = 1.0f * ((e.Value.Mass * e2.Value.Mass) / distance); //Calculate the direction of the force, simply subtracting the positions and normalizing works, this fixes diagonal vectors //from having a larger value, and basically makes VecForce a direction. VecForce = e2.Value.Position - e.Value.Position; VecForce.Normalize(); //Add the vector for each planet in the second loop to a force var. Force = Vector2.Add(Force, VecForce * mult); //I have tried Force += VecForce * mult, and have not noticed much of an increase in speed. } } } //Add that force to the first loop's planet's position (later on I'll instead add to acceleration, to account for inertia) e.Value.Position += Force; } } } } I have used various tips (about gravity optimizing, not threading) from THIS question (that I made yesterday). I've made this gravity method (Gravity.Update) as efficient as I know how to make it. This O(N^2) algorithm still seems to be eating up all of my CPU power though. Here is a LINK (google drive, go to File download, keep .Exe with the content folder, you will need XNA Framework 4.0 Redist. 
if you don't already have it) to the current version of my game. Left click makes a planet, right click removes the last planet. Mouse moves the camera, scroll wheel zooms in and out. Watch the FPS and Planet Count to see what I mean about performance issues past 70 planets. (ALL 70 planets must be moving, I've had 100 stationary planets and only 5 or so moving ones while still having 300 fps, the issue arises when 70+ are moving around) After 70 planets are made, performance tanks exponentially. With < 70 planets, I get 330 fps (I have it capped at 300). At 90 planets, the FPS is about 2, more than that and it sticks around at 0 FPS. Strangely enough, when all planets are stationary, the FPS climbs back up to around 300, but as soon as something moves, it goes right back down to what it was, I have no systems in place to make this happen, it just does. I considered multithreading, but that previous question I asked taught me a thing or two, and I see now that that's not a viable option. I've also thought maybe I could do the calculations on my GPU instead, though I don't think it should be necessary. I also do not know how to do this, it is not a simple concept and I want to avoid it unless someone knows a really noob friendly simple way to do it that will work for an n-body gravity calculation. (I have an NVidia gtx 660) Lastly I've considered using a quadtree type system. (Barnes Hut simulation) I've been told (in the previous question) that this is a good method that is commonly used, and it seems logical and straightforward, however the implementation is way over my head and I haven't found a good tutorial for C# yet that explains it in a way I can understand, or uses code I can eventually figure out. So my question is this: How can I make my gravity method more efficient, allowing me to use more than 100 objects (I can render 1000 planets with constant 300+ FPS without gravity calculations), and if I can't do much to improve performance (including some kind of quadtree system), could I use my GPU to do the calculations?

  • Database Partitioning and Multiple Data Source Considerations

    - by Jeffrey McDaniel
    With the release of P6 Reporting Database 3.0, partitioning was added as a feature to help with performance and data management. Careful investigation of requirements should be conducted prior to installation to improve overall performance throughout the lifecycle of the data warehouse and to avoid future maintenance that would result in data loss. Before installation, try to determine how many data sources and partitions will be required, along with the ranges. In P6 Reporting Database 3.0 any adjustments outside of the defaults must be made in the scripts, and changes will require new ETL runs for each data source.

    Considerations:

    1. Standard Edition or Enterprise Edition of Oracle Database. If you aren't using Oracle Enterprise Edition Database, the partitioning feature is not available. Multiple data sources are only supported on Enterprise Edition of Oracle Database.

    2. Number of data source IDs for partitioning during configuration. This setting specifies how many partitions will be allocated for tables containing data source information. It requires some evaluation prior to installation, as there are repercussions if you don't estimate correctly. For example, suppose you configured the software for only 2 data sources and the partition setting was set to 2, but along came a 3rd data source. The necessary steps to accommodate this change are as follows:

       a) By default, 3 partitions are configured in the Reporting Database scripts. Edit the create_star_tables_part.sql script located in <installation directory>\star\scripts and search for "partition". You'll see P1, P2, P3. Add additional partitions and sub-partitions for P4 and so on. These will appear in several areas. (See the P6 Reporting Database 3.0 Installation and Configuration guide for more information on this and on how to adjust partition ranges.)
       b) Run starETL -r. This will recreate each table with the new partition key. The effect of this step is that all table data will be lost except for history-related tables.
       c) Run starETL for each of the 3 data sources (with the data source #, e.g. starETL.bat "-s2", as defined in the P6 Reporting Database 3.0 Installation and Configuration guide).

       The best strategy for this setting is to overestimate based on possible growth. If during implementation it is deemed that there are at least 2 data sources with the possibility for growth, it is a better idea to set this setting to 4 or 5, allowing room for the future and preventing a 'start over' scenario.

    3. The Number of Partitions and the Number of Months per Partition settings are not specific to multi-data source. These settings control the sub-partitioning of larger tables with regard to time-related data and are dataset specific for optimization. The number of months per partition is self-explanatory: optimally, the smaller the partition, the better the query performance, so if the dataset has an extremely large number of spread/history records, a lower number of months is optimal. Working in accordance with this setting is the number of partitions, which determines how many "buckets" will be created per the number-of-months setting.

    For example, if you kept the default of 3 partitions and selected 2 months for each partition, you would end up with:

       - 1st partition: 2 months
       - 2nd partition: 2 months
       - 3rd partition: all the remaining records

    Therefore, with regard to this setting, it is important to analyze your source database spread ranges and history settings when determining the proper number of months per partition and number of partitions to optimize performance. Also be aware that the DBA will need to monitor when these partition ranges will fill up and when additional partitions will need to be added. If you get to the final range partition and there are no additional range partitions, all data will be included in the last partition.

  • Is software support an option for your career?

    - by Maria Sandu
    If you have a technical background, why should you choose a career in support? We have invited Serban to answer these questions and to give us an overview of one of the biggest technical teams in Oracle Romania. He's been with Oracle for 7 years, leading the local PeopleSoft Financials & Supply Chain Support team. Back in 2013 Serban started building a new support team in Romania – Fusion HCM. His current focus is building a strong support team for Fusion HCM, Oracle's latest solution for business HR professionals. The solution is offered both on premise (customer-site installation) and, more importantly, as a Cloud offering – SaaS.

    So, why should a technical person choose Software Support over other technical areas? "I think it is mainly because of the high level of technical skills required to provide the best technical solutions to our customers. Oracle Software Support covers complex solutions going from Database or Middleware to a vast area of business applications (basically covering any needs that a large enterprise may have). Working with such software requires very strong skills, both technical and functional, for the different areas, going from Finance, Supply Chain Management, Manufacturing and Sales to other very specific business processes. Our customers are large enterprises that already have a support layer inside their organization, and therefore the Oracle Technical Support Engineers are working with highly specialized staff (DBAs, System/Application Admins, Implementation Consultants). This is a very important aspect for our engineers because they need to be highly skilled to match our customers' specialists' expectations."

    What's the career path in your team? "Technical Analysts joining our teams have a clear growth path. The main focus is to become a master of the product they will support. I think one needs 1 or 2 years to reach a good level of understanding of the product and to deliver optimal solutions, because of the complexity of our products. At a later stage, engineers can choose their professional development areas based on the business needs and their preferences, and then grow further towards a technical expert or a management role. We have analysts with more than 15 years of technical expertise who still learn and grow in the technical area. An important fact is that, due to the expansion of the Romanian Software Support center, there are various management opportunities. So, if you want to leverage your experience and you want to have people management responsibilities, Oracle Software Support is the place to be!"

    Our last question to Serban was about the benefits of being part of Oracle Software Support. Here is what he said: "We believe that Oracle delivers a 'state of the art' support level to our customers. This is not possible without high investment in our staff. We commit from the start to support any technical analyst that joins us (whether junior or very senior) with any training needs they have for their job. We have various technical trainings as well as soft-skills trainings required for a customer-facing professional to be successful in this role. Last but not least, we're aiming to make Oracle Romania SW Support a global center of excellence, which means we're investing a lot in our employees."

    If you're looking for a job where you can combine your strong technical skills with customer interaction, Oracle Software Support is the place to be! Send us your CV at [email protected].

  • Is it feasible and useful to auto-generate some code of unit tests?

    - by skiwi
    Earlier today I have come up with an idea, based upon a particular real use case, which I would want to have checked for feasability and usefulness. This question will feature a fair chunk of Java code, but can be applied to all languages running inside a VM, and maybe even outside. While there is real code, it uses nothing language-specific, so please read it mostly as pseudo code. The idea Make unit testing less cumbersome by adding in some ways to autogenerate code based on human interaction with the codebase. I understand this goes against the principle of TDD, but I don't think anyone ever proved that doing TDD is better over first creating code and then immediatly therafter the tests. This may even be adapted to be fit into TDD, but that is not my current goal. To show how it is intended to be used, I'll copy one of my classes here, for which I need to make unit tests. public class PutMonsterOnFieldAction implements PlayerAction { private final int handCardIndex; private final int fieldMonsterIndex; public PutMonsterOnFieldAction(final int handCardIndex, final int fieldMonsterIndex) { this.handCardIndex = Arguments.requirePositiveOrZero(handCardIndex, "handCardIndex"); this.fieldMonsterIndex = Arguments.requirePositiveOrZero(fieldMonsterIndex, "fieldCardIndex"); } @Override public boolean isActionAllowed(final Player player) { Objects.requireNonNull(player, "player"); Hand hand = player.getHand(); Field field = player.getField(); if (handCardIndex >= hand.getCapacity()) { return false; } if (fieldMonsterIndex >= field.getMonsterCapacity()) { return false; } if (field.hasMonster(fieldMonsterIndex)) { return false; } if (!(hand.get(handCardIndex) instanceof MonsterCard)) { return false; } return true; } @Override public void performAction(final Player player) { Objects.requireNonNull(player); if (!isActionAllowed(player)) { throw new PlayerActionNotAllowedException(); } Hand hand = player.getHand(); Field field = player.getField(); field.setMonster(fieldMonsterIndex, (MonsterCard)hand.play(handCardIndex)); } } We can observe the need for the following tests: Constructor test with valid input Constructor test with invalid inputs isActionAllowed test with valid input isActionAllowed test with invalid inputs performAction test with valid input performAction test with invalid inputs My idea mainly focuses on the isActionAllowed test with invalid inputs. Writing these tests is not fun, you need to ensure a number of conditions and you check whether it really returns false, this can be extended to performAction, where an exception needs to be thrown in that case. The goal of my idea is to generate those tests, by indicating (through GUI of IDE hopefully) that you want to generate tests based on a specific branch. The implementation by example User clicks on "Generate code for branch if (handCardIndex >= hand.getCapacity())". Now the tool needs to find a case where that holds. (I haven't added the relevant code as that may clutter the post ultimately) To invalidate the branch, the tool needs to find a handCardIndex and hand.getCapacity() such that the condition >= holds. It needs to construct a Player with a Hand that has a capacity of at least 1. It notices that the capacity private int of Hand needs to be at least 1. It searches for ways to set it to 1. Fortunately it finds a constructor that takes the capacity as an argument. It uses 1 for this. 
Some more work needs to be done to succesfully construct a Player instance, involving the creation of objects that have constraints that can be seen by inspecting the source code. It has found the hand with the least capacity possible and is able to construct it. Now to invalidate the test it will need to set handCardIndex = 1. It constructs the test and asserts it to be false (the returned value of the branch) What does the tool need to work? In order to function properly, it will need the ability to scan through all source code (including JDK code) to figure out all constraints. Optionally this could be done through the javadoc, but that is not always used to indicate all constraints. It could also do some trial and error, but it pretty much stops if you cannot attach source code to compiled classes. Then it needs some basic knowledge of what the primitive types are, including arrays. And it needs to be able to construct some form of "modification trees". The tool knows that it needs to change a certain variable to a different value in order to get the correct testcase. Hence it will need to list all possible ways to change it, without using reflection obviously. What this tool will not replace is the need to create tailored unit tests that tests all kinds of conditions when a certain method actually works. It is purely to be used to test methods when they invalidate constraints. My questions: Is creating such a tool feasible? Would it ever work, or are there some obvious problems? Would such a tool be useful? Is it even useful to automatically generate these testcases at all? Could it be extended to do even more useful things? Does, by chance, such a project already exist and would I be reinventing the wheel? If not proven useful, but still possible to make such thing, I will still consider it for fun. If it's considered useful, then I might make an open source project for it depending on the time. For people searching more background information about the used Player and Hand classes in my example, please refer to this repository. At the time of writing the PutMonsterOnFieldAction has not been uploaded to the repo yet, but this will be done once I'm done with the unit tests.
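    As a concrete illustration of the output being proposed, the test generated for the branch "if (handCardIndex >= hand.getCapacity())" might look roughly like the JUnit sketch below. The Hand, Field and Player constructors are assumptions about the game's API (the text above only states that Hand has a constructor taking the capacity); the interesting part is only the shape of the code the tool would emit.

        import static org.junit.Assert.assertFalse;
        import org.junit.Test;

        public class PutMonsterOnFieldActionGeneratedTest {

            @Test
            public void isActionAllowedReturnsFalseWhenHandCardIndexExceedsCapacity() {
                Hand hand = new Hand(1);                            // smallest capacity the constraints allow
                Player player = new Player(hand, new Field(1));     // assumed way to build a valid Player
                PutMonsterOnFieldAction action = new PutMonsterOnFieldAction(1, 0); // handCardIndex == capacity
                assertFalse(action.isActionAllowed(player));        // the branch under test must return false
            }
        }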

  • Deduping your redundancies

    - by nospam(at)example.com (Joerg Moellenkamp)
    Robin Harris of StorageMojo pointed to an interesting article on ACM Queue about deduplication and its impact on the resiliency of your data against corruption.

    The problem in short: a considerable number of filesystems store important metadata at multiple locations. For example, the ZFS rootblock is copied to three locations, and other filesystems have similar provisions to protect their metadata. However, you can easily prove that the rootblock pointer in the uberblock of ZFS, for example, points to blocks with absolutely equal content in all three locations (with zdb -uu and zdb -r). It has to be that way, because they are protected by the same checksum. A number of devices offer block-level dedup, either as an option or as part of their inner workings. When you store three identical blocks on such a device and it does block-level dedup internally, it may just deduplicate your redundant metadata down to a single block on the non-volatile storage. When this block is corrupted, you have essentially three corrupted copies. Three hit with one bullet.

    This is indeed an interesting problem: a device doing deduplication doesn't know if a block is important metadata or just a data block. This is the reason why I like deduplication the way it's done in ZFS. It's an integrated part of the filesystem, so important parts don't get deduplicated away. A disk accessed by a block-level interface doesn't know anything about the importance of a block. To its inner mechanism a metadata block is nothing different from a normal data block, because there is no way to tell that this block is important and that its redundancies aren't allowed to fall prey to some clever deduplication mechanism. Robin talks about this in regard to the SandForce disk controllers, which use a kind of dedup to reduce some of the nasty effects of writing data to flash, but the problem is much broader. It is relevant whenever you are using a device with block-level deduplication. The point is just that for most implementations you have to activate it explicitly by command, whereas certain devices do this by default or by design and you don't know about it. However, I'm not perfectly sure about that – given that storage administration and server administration are often different groups with different business objectives, I would ask your storage guys if they have activated dedup on their boxes without telling anybody, in order to speak less often with the storage sales rep.

    The problem is even more interesting with ZFS. You may use ditto blocks to protect important data, storing multiple copies of the data in the pool to increase redundancy, even when your pool consists of just one disk or just a striped set of disks. However, when your device is doing dedup internally it may remove your redundancy before it hits the non-volatile storage. You've won nothing; you've just spent your disk quota on the LUNs in the SAN, and you make your disk admin happy because of the good dedup ratio. You can only fall into this specific "deduped ditto block" trap when your pool consists of a single device, because ZFS writes ditto blocks to different disks when there is more than one disk. Yet another reason why you should spend some extra thought when putting your zpool on a single LUN, especially when the LUN is sliced and diced out of a large heap of storage devices by a storage controller.

    However, I have one problem with the articles and their specific mention of ZFS: you can only be hit by this problem when you are using the deduplicating device for the pool itself. In the specifically mentioned case of SSDs this isn't the use case. Most deployments of SSDs in conjunction with ZFS are hybrid storage pools, where rotating rust disks are used as the pool and SSDs are used as L2ARC/sZIL. And there it simply doesn't matter: when you really have to resort to the sZIL (your system went down), it doesn't matter whether one block or several blocks are corrupt – you have to fall back to the last known good transaction group on the device. On the other side, when a block in L2ARC is corrupt, you simply read it from the pool, and in HSP implementations that is the already mentioned rust. In conjunction with ZFS this is more interesting when using a storage array that is capable of doing dedup and where you use LUNs for your pool. However, as mentioned before, on those devices it's a user-made decision to enable it, so it's less probable that you are deduplicating your redundancies. Other filesystems, lacking a capability similar to hybrid storage pools, are more "haunted" by this problem of SSDs using dedup-like mechanisms internally, because those filesystems really store the data on the SSD instead of using it just as an accelerating device. At the end, though, Robin is correct: it's yet another point why protecting your data by creating redundancies, dispersing it across several disks (by mirror or parity RAIDs), is really important. No dedup mechanism inside a device can dedup away your redundancy when you write it to a totally different and independent device.

    Read the article

  • Using XA Transactions in Coherence-based Applications

    - by jpurdy
    While the costs of XA transactions are well known (e.g. increased data contention, higher latency, significant disk I/O for logging, availability challenges, etc.), in many cases they are the most attractive option for coordinating logical transactions across multiple resources. There are a few common approaches when integrating Coherence into applications via the use of an application server's transaction manager: Use of Coherence as a read-only cache, applying transactions to the underlying database (or any system of record) instead of the cache. Use of TransactionMap interface via the included resource adapter. Use of the new ACID transaction framework, introduced in Coherence 3.6.   Each of these may have significant drawbacks for certain workloads. Using Coherence as a read-only cache is the simplest option. In this approach, the application is responsible for managing both the database and the cache (either within the business logic or via application server hooks). This approach also tends to provide limited benefit for many workloads, particularly those workloads that either have queries (given the complexity of maintaining a fully cached data set in Coherence) or are not read-heavy (where the cost of managing the cache may outweigh the benefits of reading from it). All updates are made synchronously to the database, leaving it as both a source of latency as well as a potential bottleneck. This approach also prevents addressing "hot data" problems (when certain objects are updated by many concurrent transactions) since most database servers offer no facilities for explicitly controlling concurrent updates. Finally, this option tends to be a better fit for key-based access (rather than filter-based access such as queries) since this makes it easier to aggressively invalidate cache entries without worrying about when they will be reloaded. The advantage of this approach is that it allows strong data consistency as long as optimistic concurrency control is used to ensure that database updates are applied correctly regardless of whether the cache contains stale (or even dirty) data. Another benefit of this approach is that it avoids the limitations of Coherence's write-through caching implementation. TransactionMap is generally used when Coherence acts as system of record. TransactionMap is not generally compatible with write-through caching, so it will usually be either used to manage a standalone cache or when the cache is backed by a database via write-behind caching. TransactionMap has some restrictions that may limit its utility, the most significant being: The lock-based concurrency model is relatively inefficient and may introduce significant latency and contention. As an example, in a typical configuration, a transaction that updates 20 cache entries will require roughly 40ms just for lock management (assuming all locks are granted immediately, and excluding validation and writing which will require a similar amount of time). This may be partially mitigated by denormalizing (e.g. combining a parent object and its set of child objects into a single cache entry), at the cost of increasing false contention (e.g. transactions will conflict even when updating different child objects). If the client (application server JVM) fails during the commit phase, locks will be released immediately, and the transaction may be partially committed. In practice, this is usually not as bad as it may sound since the commit phase is usually very short (all locks having been previously acquired). 
Note that this vulnerability does not exist when a single NamedCache is used and all updates are confined to a single partition (generally implying the use of partition affinity). The unconventional TransactionMap API is cumbersome but manageable; only a few methods are transactional, primarily get(), put() and remove(). The ACID transactions framework (accessed via the Connection class) provides atomicity guarantees by implementing the NamedCache interface, maintaining its own cache data and transaction logs inside a set of private partitioned caches. This feature may be used either as a local transactional resource or as a logging XA resource. However, a lack of database integration precludes the use of this functionality for most applications. A side effect is that this feature has not seen significant adoption, meaning that any use of it is subject to the usual headaches associated with being an early adopter (greater chance of bugs and greater risk of hitting an unoptimized code path). As a result, for the moment, we generally recommend against using this feature. In summary, it is possible to use Coherence in XA-oriented applications, and several customers are doing this successfully, but it is not a core usage model for the product, so care should be taken before committing to this path. For most applications, the most robust solution is normally to use Coherence as a read-only cache of the underlying data resources, even if this prevents taking advantage of certain product features. A minimal sketch of that read-only cache pattern follows below.
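As a concrete illustration of the first approach (Coherence as a read-only cache in front of the system of record), here is a minimal sketch. The JNDI names, the "accounts" cache, the table and the key scheme are illustrative assumptions, not part of the product; the point is simply that the XA/JTA transaction touches only the database, and the cache entry is invalidated after commit so the next read loads fresh data:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;
    import javax.transaction.UserTransaction;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class AccountBalanceUpdater {

        public void creditAccount(String accountId, long amountCents) throws Exception {
            InitialContext ctx = new InitialContext();
            // Container-provided resources; the JNDI names are assumptions.
            UserTransaction tx = (UserTransaction) ctx.lookup("java:comp/UserTransaction");
            DataSource ds = (DataSource) ctx.lookup("jdbc/AccountsXA");

            // The XA transaction covers only the database update.
            tx.begin();
            boolean committed = false;
            try (Connection con = ds.getConnection();
                 PreparedStatement ps = con.prepareStatement(
                         "UPDATE accounts SET balance = balance + ? WHERE id = ?")) {
                ps.setLong(1, amountCents);
                ps.setString(2, accountId);
                ps.executeUpdate();
                tx.commit();
                committed = true;
            } finally {
                if (!committed) {
                    tx.rollback();
                }
            }

            // Invalidate rather than update the cached value: the next get() reads
            // through to the database, so stale or dirty cache data cannot outlive
            // a successful commit.
            NamedCache accounts = CacheFactory.getCache("accounts");
            accounts.remove(accountId);
        }
    }

Evicting by key after every committed write is also why this pattern suits key-based access better than query-based access, as noted above.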

    Read the article

  • Viewing the NetBeans Central Registry (Part 2)

    - by Geertjan
    Jens Hofschröer, who has one of the very best NetBeans Platform blogs (if you more or less understand German), and who wrote, sometime ago, the initial version of the Import Statement Organizer, as well as being the main developer of a great gear design & manufacturing tool on the NetBeans Platform in Aachen, commented on my recent blog entry "Viewing the NetBeans Central Registry", where the root Node of the Central Registry is shown in a BeanTreeView, with the words: "I wrapped that Node in a FilterNode to provide the 'position' attribute and the 'file extension'. All Children are wrapped too. Then I used an OutlineView to show these two properties. Great tool to find wrong layer entries." I asked him for the code he describes above and he sent it to me. He discussed it here in his blog, while all the code involved can be read below. The result is as follows, where you can see that the OutlineView shows information that my simple implementation (via a BeanTreeView) kept hidden: And so here is the definition of the Node. class LayerPropertiesNode extends FilterNode { public LayerPropertiesNode(Node node) { super(node, isFolder(node) ? Children.create(new LayerPropertiesFactory(node), true) : Children.LEAF); } private static boolean isFolder(Node node) { return null != node.getLookup().lookup(DataFolder.class); } @Override public String getDisplayName() { return getLookup().lookup(FileObject.class).getName(); } @Override public Image getIcon(int type) { FileObject fo = getLookup().lookup(FileObject.class); try { DataObject data = DataObject.find(fo); return data.getNodeDelegate().getIcon(type); } catch (DataObjectNotFoundException ex) { Exceptions.printStackTrace(ex); } return super.getIcon(type); } @Override public Image getOpenedIcon(int type) { return getIcon(type); } @Override public PropertySet[] getPropertySets() { Set set = Sheet.createPropertiesSet(); set.put(new PropertySupport.ReadOnly<Integer>( "position", Integer.class, "Position", null) { @Override public Integer getValue() throws IllegalAccessException, InvocationTargetException { FileObject fileEntry = getLookup().lookup(FileObject.class); Integer posValue = (Integer) fileEntry.getAttribute("position"); return posValue != null ? posValue : Integer.valueOf(0); } }); set.put(new PropertySupport.ReadOnly<String>( "ext", String.class, "Extension", null) { @Override public String getValue() throws IllegalAccessException, InvocationTargetException { FileObject fileEntry = getLookup().lookup(FileObject.class); return fileEntry.getExt(); } }); PropertySet[] original = super.getPropertySets(); PropertySet[] withLayer = new PropertySet[original.length + 1]; System.arraycopy(original, 0, withLayer, 0, original.length); withLayer[withLayer.length - 1] = set; return withLayer; } private static class LayerPropertiesFactory extends ChildFactory<FileObject> { private final Node context; public LayerPropertiesFactory(Node context) { this.context = context; } @Override protected boolean createKeys(List<FileObject> list) { FileObject folder = context.getLookup().lookup(FileObject.class); FileObject[] children = folder.getChildren(); List<FileObject> ordered = FileUtil.getOrder(Arrays.asList(children), false); list.addAll(ordered); return true; } @Override protected Node createNodeForKey(FileObject key) { AbstractNode node = new AbstractNode(org.openide.nodes.Children.LEAF, key.isFolder() ? 
Lookups.fixed(key, DataFolder.findFolder(key)) : Lookups.singleton(key)); return new LayerPropertiesNode(node); } } } Then here is the definition of the Action, which pops up a JPanel, displaying an OutlineView: @ActionID(category = "Tools", id = "de.nigjo.nb.layerview.LayerViewAction") @ActionRegistration(displayName = "#CTL_LayerViewAction") @ActionReferences({ @ActionReference(path = "Menu/Tools", position = 1450, separatorBefore = 1425) }) @Messages("CTL_LayerViewAction=Display XML Layer") public final class LayerViewAction implements ActionListener { @Override public void actionPerformed(ActionEvent e) { try { Node node = DataObject.find(FileUtil.getConfigRoot()).getNodeDelegate(); node = new LayerPropertiesNode(node); node = new FilterNode(node) { @Override public Component getCustomizer() { LayerView view = new LayerView(); view.getExplorerManager().setRootContext(this); return view; } @Override public boolean hasCustomizer() { return true; } }; NodeOperation.getDefault().customize(node); } catch (DataObjectNotFoundException ex) { Exceptions.printStackTrace(ex); } } private static class LayerView extends JPanel implements ExplorerManager.Provider { private final ExplorerManager em; public LayerView() { super(new BorderLayout()); em = new ExplorerManager(); OutlineView view = new OutlineView("entry"); view.addPropertyColumn("position", "Position"); view.addPropertyColumn("ext", "Extension"); add(view); } @Override public ExplorerManager getExplorerManager() { return em; } } }

    Read the article

  • R12.0 Cash Management Consolidated Patch Collection (CPC) And R12.1 Cash Management Recommended Patch Collection (RPC)

    - by user793553
If you have Oracle E-Business Suite's Cash Management (CE) application installed, you'll want to be sure to install the latest CPC (Consolidated Patch Collection) if you are using an R12.0 version of the apps, or the latest RPC (Recommended Patch Collection) for the R12.1 version of the apps. These collections give you all the fixes currently available for known issues in the specified versions of the application, including all of the latest Root Cause Analysis Fixes (RCAs)! What is an "RPC" (for R12.1 users)? Since the release of 12.1, a number of recommended patches for Oracle Cash Management have been made available as standalone patches to help address important business process issues. Adoption of these patches was highly recommended at the time, but not always implemented, so to further facilitate adoption Oracle consolidated them into product-specific Recommended Patch Collections (RPCs). They were created by Oracle Development with the following goals in mind: Stability: to address data integrity issues that have been identified by Oracle Development and Oracle Software Support as having the potential to interfere with the normal completion of important business processes (such as period close). Root Cause Fixes (RCAs): to make available root cause fixes for known data integrity issues. Compact: to keep the file footprint as small as possible to help facilitate the install process and minimize testing. Granular: to compile the collection of patches based on functional areas, allowing a customer to apply multiple RPCs at once, or in phases (based on individual needs and goals). Where to start: ALL R12 Cash Management users (R12.0 and R12.1) should start with the following Note on My Oracle Support (MOS): Doc ID 1367845.1: R12: Cash Management Recommended Patch Collections. It's a great place for important implementation information about both sets of critical patch collections! For R12.1.x users: R12.1 users should also take a look at the documents below for even more information about the RPC for the R12.1.x versions of the Cash Management application, and other related available RPCs: Doc ID 1489997.1 (Master Troubleshooting Guide for CE: Reconciliation & Clearing [VIDEO]), Doc ID 954704.1 (EBS: R12.1 Oracle Financials Recommended Patch Collections (RPCs)), and Doc ID 1316506.1 (R12: Oracle CE: Upgrading from R11i to R12.1: Latest Recommended Patches). Patch Wizard Utility: While a patch may contain several hundred files, the impact on your system may actually be minimal. Patches contain hard prerequisites that are intended to make a patch work on a very low code baseline. The Patch Wizard Utility will give you a detailed analysis of the patch's impact on your instance BEFORE it's applied, so you'll know exactly what to expect from the application. Please refer to Doc ID 976188.1 for more information on this important utility.

    Read the article

  • What's New in SGD 5.1?

    - by Fat Bloke
Oracle announced the latest version of Secure Global Desktop (SGD) this week with 3 major themes: support for Android devices; support for desktop Chrome clients; and support for Oracle Unified Directory. I'll talk about the new features in a moment, but a bit of context first: Oracle SGD - what, how and why? Oracle Secure Global Desktop is Oracle's secure remote access product, which allows users on almost any device to access almost any type of application hosted in the data center, from almost any location. And it does this by sitting on the edge of the datacenter, between the user and the applications. This is actually a really smart environment for an increasing number of use cases where: users need mobility of location AND device (i.e. work from anywhere); IT needs to ensure security of applications and data (of course!); and the application requires an end-user environment which can't be guaranteed, since IT may not own the client platform (e.g. BYOD, working from home, partners or contractors). Oracle has a specific interest in this of course. As the leading supplier of enterprise applications, many of Oracle's customers, and indeed Oracle itself, fit these criteria. So, as an IT guy rolling out an application to your employees, if one of your apps absolutely needs, say, IE10 with Java 6 update 32, how can you be sure that the user population has this, especially when they're using their own devices? In the SGD model you, the IT guy, can set up, say, a Windows Server running the exact environment required, and then use SGD to publish this app, without needing to worry any further about the device the end user is using. What's new? So back to SGD 5.1 and what is new there: Android devices. Since we introduced our support for iPad tablets in SGD 5.0 we've had a big demand from customers to extend this to Android tablets too, and so we're pleased to announce that 5.1 supports Android 4.x tablets such as the Nexus 7 and 10, and the Galaxy Tab. Here's how it works, with screenshots from my Nexus 7: simply point your browser to the SGD server URL and log in; the workspace is the list of apps that the admin has deemed OK for you to run; you click on an application to run it (here's Excel and Oracle E-Business Suite). There's an extended on-screen keyboard (extended because desktop apps need keys that don't appear on a tablet keyboard, such as Ctrl, the Windows key, etc.) and touch gestures can be mapped to desktop events (such as tap-and-hold to right-click). All in all a pretty nice implementation for Android tablet users. Desktop Chrome browsers. SGD has always been designed around using a browser to access your applications. But traditionally, this has involved using Java to deliver the SGD client component. With HTML5 and JavaScript engines becoming so powerful, we thought we'd see how well a pure web client could perform with desktop apps. And the answer was: surprisingly well. So with this release we now offer this additional way of working, which can be enabled by a simple bit of configuration. Here's a Linux desktop running in a tab in Chrome. And if you resize the browser window, the Linux desktop is resized by SGD too. Very cool! Oracle Unified Directory. As I mentioned above, a lot of Oracle users already benefit from SGD. And a lot of Oracle customers use Oracle Unified Directory as their enterprise and carrier-grade user directory.
So it makes a lot of sense that SGD now supports this LDAP directory both for authentication and as a means to determine which users get which applications, e.g. publish the engineering app to the guys in the Development group, but give everyone E-Business Suite to let them do their expenses. Summary: With new devices and faster 4G networking becoming more prevalent, the pressure for businesses to move to an increasingly mobile enterprise is stronger than ever. SGD is good for users, and even better for IT. By offering the user the ability to work from anywhere, and IT the control and security they need, everyone wins with SGD. To try this for yourself, download SGD 5.1 (look under Desktop Virtualization Products) from the Oracle Software Delivery Cloud or, if you're an existing customer, get it from My Oracle Support. -FB

    Read the article

  • Unable to install Xdebug

    - by burnt1ce
    I've registered xdebug in php.ini (as per http://xdebug.org/docs/install) but it's not showing up when i run "php -m" or when i get a test page to run "phpinfo()". I've just installed the latest version of XAMPP. Can anyone provide any suggestions in getting xdebug to show up? This is what i get when i run phpinfo(). **PHP Version 5.3.1** System Windows NT ANDREW_LAPTOP 5.1 build 2600 (Windows XP Professional Service Pack 3) i586 Build Date Nov 20 2009 17:20:57 Compiler MSVC6 (Visual C++ 6.0) Architecture x86 Configure Command cscript /nologo configure.js "--enable-snapshot-build" Server API Apache 2.0 Handler Virtual Directory Support enabled Configuration File (php.ini) Path no value Loaded Configuration File C:\xampp\php\php.ini Scan this dir for additional .ini files (none) Additional .ini files parsed (none) PHP API 20090626 PHP Extension 20090626 Zend Extension 220090626 Zend Extension Build API220090626,TS,VC6 PHP Extension Build API20090626,TS,VC6 Debug Build no Thread Safety enabled Zend Memory Manager enabled Zend Multibyte Support disabled IPv6 Support enabled Registered PHP Streams https, ftps, php, file, glob, data, http, ftp, compress.zlib, compress.bzip2, phar, zip Registered Stream Socket Transports tcp, udp, ssl, sslv3, sslv2, tls Registered Stream Filters convert.iconv.*, string.rot13, string.toupper, string.tolower, string.strip_tags, convert.*, consumed, dechunk, zlib.*, bzip2.* This program makes use of the Zend Scripting Language Engine: Zend Engine v2.3.0, Copyright (c) 1998-2009 Zend Technologies PHP Credits Configuration apache2handler Apache Version Apache/2.2.14 (Win32) DAV/2 mod_ssl/2.2.14 OpenSSL/0.9.8l mod_autoindex_color PHP/5.3.1 mod_apreq2-20090110/2.7.1 mod_perl/2.0.4 Perl/v5.10.1 Apache API Version 20051115 Server Administrator postmaster@localhost Hostname:Port localhost:80 Max Requests Per Child: 0 - Keep Alive: on - Max Per Connection: 100 Timeouts Connection: 300 - Keep-Alive: 5 Virtual Server No Server Root C:/xampp/apache Loaded Modules core mod_win32 mpm_winnt http_core mod_so mod_actions mod_alias mod_asis mod_auth_basic mod_auth_digest mod_authn_default mod_authn_file mod_authz_default mod_authz_groupfile mod_authz_host mod_authz_user mod_cgi mod_dav mod_dav_fs mod_dav_lock mod_dir mod_env mod_headers mod_include mod_info mod_isapi mod_log_config mod_mime mod_negotiation mod_rewrite mod_setenvif mod_ssl mod_status mod_autoindex_color mod_php5 mod_perl mod_apreq2 Directive Local Value Master Value engine 1 1 last_modified 0 0 xbithack 0 0 Apache Environment Variable Value MIBDIRS C:/xampp/php/extras/mibs MYSQL_HOME C:\xampp\mysql\bin OPENSSL_CONF C:/xampp/apache/bin/openssl.cnf PHP_PEAR_SYSCONF_DIR C:\xampp\php PHPRC C:\xampp\php TMP C:\xampp\tmp HTTP_HOST localhost HTTP_CONNECTION keep-alive HTTP_USER_AGENT Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/533.2 (KHTML, like Gecko) Chrome/5.0.342.8 Safari/533.2 HTTP_CACHE_CONTROL max-age=0 HTTP_ACCEPT application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5 HTTP_ACCEPT_ENCODING gzip,deflate,sdch HTTP_ACCEPT_LANGUAGE en-US,en;q=0.8 HTTP_ACCEPT_CHARSET ISO-8859-1,utf-8;q=0.7,*;q=0.3 PATH C:\Documents and Settings\Andrew\Local Settings\Application Data\Google\Chrome\Application;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;c:\Program Files\Microsoft SQL Server\100\Tools\Binn\;c:\Program Files\Microsoft SQL Server\100\DTS\Binn\;C:\Program Files\QuickTime\QTSystem\;C:\Program Files\Common Files\DivX Shared\;C:\Program Files\WiTopia.Net\bin 
SystemRoot C:\WINDOWS COMSPEC C:\WINDOWS\system32\cmd.exe PATHEXT .COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH WINDIR C:\WINDOWS SERVER_SIGNATURE <address>Apache/2.2.14 (Win32) DAV/2 mod_ssl/2.2.14 OpenSSL/0.9.8l mod_autoindex_color PHP/5.3.1 mod_apreq2-20090110/2.7.1 mod_perl/2.0.4 Perl/v5.10.1 Server at localhost Port 80</address> SERVER_SOFTWARE Apache/2.2.14 (Win32) DAV/2 mod_ssl/2.2.14 OpenSSL/0.9.8l mod_autoindex_color PHP/5.3.1 mod_apreq2-20090110/2.7.1 mod_perl/2.0.4 Perl/v5.10.1 SERVER_NAME localhost SERVER_ADDR 127.0.0.1 SERVER_PORT 80 REMOTE_ADDR 127.0.0.1 DOCUMENT_ROOT C:/xampp/htdocs SERVER_ADMIN postmaster@localhost SCRIPT_FILENAME C:/xampp/htdocs/test.php REMOTE_PORT 3275 GATEWAY_INTERFACE CGI/1.1 SERVER_PROTOCOL HTTP/1.1 REQUEST_METHOD GET QUERY_STRING no value REQUEST_URI /test.php SCRIPT_NAME /test.php HTTP Headers Information HTTP Request Headers HTTP Request GET /test.php HTTP/1.1 Host localhost Connection keep-alive User-Agent Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/533.2 (KHTML, like Gecko) Chrome/5.0.342.8 Safari/533.2 Cache-Control max-age=0 Accept application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5 Accept-Encoding gzip,deflate,sdch Accept-Language en-US,en;q=0.8 Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.3 HTTP Response Headers X-Powered-By PHP/5.3.1 Keep-Alive timeout=5, max=80 Connection Keep-Alive Transfer-Encoding chunked Content-Type text/html bcmath BCMath support enabled Directive Local Value Master Value bcmath.scale 0 0 bz2 BZip2 Support Enabled Stream Wrapper support compress.bz2:// Stream Filter support bzip2.decompress, bzip2.compress BZip2 Version 1.0.5, 10-Dec-2007 calendar Calendar support enabled com_dotnet COM support enabled DCOM support disabled .Net support enabled Directive Local Value Master Value com.allow_dcom 0 0 com.autoregister_casesensitive 1 1 com.autoregister_typelib 0 0 com.autoregister_verbose 0 0 com.code_page no value no value com.typelib_file no value no value Core PHP Version 5.3.1 Directive Local Value Master Value allow_call_time_pass_reference On On allow_url_fopen On On allow_url_include Off Off always_populate_raw_post_data Off Off arg_separator.input & & arg_separator.output &amp; &amp; asp_tags Off Off auto_append_file no value no value auto_globals_jit On On auto_prepend_file no value no value browscap C:\xampp\php\extras\browscap.ini C:\xampp\php\extras\browscap.ini default_charset no value no value default_mimetype text/html text/html define_syslog_variables Off Off disable_classes no value no value disable_functions no value no value display_errors On On display_startup_errors On On doc_root no value no value docref_ext no value no value docref_root no value no value enable_dl On On error_append_string no value no value error_log no value no value error_prepend_string no value no value error_reporting 22519 22519 exit_on_timeout Off Off expose_php On On extension_dir C:\xampp\php\ext C:\xampp\php\ext file_uploads On On highlight.bg #FFFFFF #FFFFFF highlight.comment #FF8000 #FF8000 highlight.default #0000BB #0000BB highlight.html #000000 #000000 highlight.keyword #007700 #007700 highlight.string #DD0000 #DD0000 html_errors On On ignore_repeated_errors Off Off ignore_repeated_source Off Off ignore_user_abort Off Off implicit_flush Off Off include_path .;C:\xampp\php\PEAR .;C:\xampp\php\PEAR log_errors Off Off log_errors_max_len 1024 1024 magic_quotes_gpc Off Off magic_quotes_runtime Off Off magic_quotes_sybase Off Off mail.add_x_header Off Off 
mail.force_extra_parameters no value no value mail.log no value no value max_execution_time 60 60 max_file_uploads 20 20 max_input_nesting_level 64 64 max_input_time 60 60 memory_limit 128M 128M open_basedir no value no value output_buffering no value no value output_handler no value no value post_max_size 128M 128M precision 14 14 realpath_cache_size 16K 16K realpath_cache_ttl 120 120 register_argc_argv On On register_globals Off Off register_long_arrays Off Off report_memleaks On On report_zend_debug On On request_order no value no value safe_mode Off Off safe_mode_exec_dir no value no value safe_mode_gid Off Off safe_mode_include_dir no value no value sendmail_from no value no value sendmail_path no value no value serialize_precision 100 100 short_open_tag Off Off SMTP localhost localhost smtp_port 25 25 sql.safe_mode Off Off track_errors Off Off unserialize_callback_func no value no value upload_max_filesize 128M 128M upload_tmp_dir C:\xampp\tmp C:\xampp\tmp user_dir no value no value user_ini.cache_ttl 300 300 user_ini.filename .user.ini .user.ini variables_order GPCS GPCS xmlrpc_error_number 0 0 xmlrpc_errors Off Off y2k_compliance On On zend.enable_gc On On ctype ctype functions enabled date date/time support enabled "Olson" Timezone Database Version 2009.18 Timezone Database internal Default timezone America/New_York Directive Local Value Master Value date.default_latitude 31.7667 31.7667 date.default_longitude 35.2333 35.2333 date.sunrise_zenith 90.583333 90.583333 date.sunset_zenith 90.583333 90.583333 date.timezone America/New_York America/New_York dom DOM/XML enabled DOM/XML API Version 20031129 libxml Version 2.7.6 HTML Support enabled XPath Support enabled XPointer Support enabled Schema Support enabled RelaxNG Support enabled ereg Regex Library System library enabled exif EXIF Support enabled EXIF Version 1.4 $Id: exif.c 287372 2009-08-16 14:32:32Z iliaa $ Supported EXIF Version 0220 Supported filetypes JPEG,TIFF Directive Local Value Master Value exif.decode_jis_intel JIS JIS exif.decode_jis_motorola JIS JIS exif.decode_unicode_intel UCS-2LE UCS-2LE exif.decode_unicode_motorola UCS-2BE UCS-2BE exif.encode_jis no value no value exif.encode_unicode ISO-8859-15 ISO-8859-15 fileinfo fileinfo support enabled version 1.0.5-dev filter Input Validation and Filtering enabled Revision $Revision: 289434 $ Directive Local Value Master Value filter.default unsafe_raw unsafe_raw filter.default_flags no value no value ftp FTP support enabled gd GD Support enabled GD Version bundled (2.0.34 compatible) FreeType Support enabled FreeType Linkage with freetype FreeType Version 2.3.11 T1Lib Support enabled GIF Read Support enabled GIF Create Support enabled JPEG Support enabled libJPEG Version 7 PNG Support enabled libPNG Version 1.2.40 WBMP Support enabled XBM Support enabled JIS-mapped Japanese Font Support enabled Directive Local Value Master Value gd.jpeg_ignore_warning 0 0 gettext GetText Support enabled hash hash support enabled Hashing Engines md2 md4 md5 sha1 sha224 sha256 sha384 sha512 ripemd128 ripemd160 ripemd256 ripemd320 whirlpool tiger128,3 tiger160,3 tiger192,3 tiger128,4 tiger160,4 tiger192,4 snefru snefru256 gost adler32 crc32 crc32b salsa10 salsa20 haval128,3 haval160,3 haval192,3 haval224,3 haval256,3 haval128,4 haval160,4 haval192,4 haval224,4 haval256,4 haval128,5 haval160,5 haval192,5 haval224,5 haval256,5 iconv iconv support enabled iconv implementation "libiconv" iconv library version 1.13 Directive Local Value Master Value iconv.input_encoding ISO-8859-1 ISO-8859-1 
iconv.internal_encoding ISO-8859-1 ISO-8859-1 iconv.output_encoding ISO-8859-1 ISO-8859-1 imap IMAP c-Client Version 2007e SSL Support enabled json json support enabled json version 1.2.1 libxml libXML support active libXML Compiled Version 2.7.6 libXML Loaded Version 20706 libXML streams enabled mbstring Multibyte Support enabled Multibyte string engine libmbfl HTTP input encoding translation disabled mbstring extension makes use of "streamable kanji code filter and converter", which is distributed under the GNU Lesser General Public License version 2.1. Multibyte (japanese) regex support enabled Multibyte regex (oniguruma) version 4.7.1 Directive Local Value Master Value mbstring.detect_order no value no value mbstring.encoding_translation Off Off mbstring.func_overload 0 0 mbstring.http_input pass pass mbstring.http_output pass pass mbstring.http_output_conv_mimetypes ^(text/|application/xhtml\+xml) ^(text/|application/xhtml\+xml) mbstring.internal_encoding no value no value mbstring.language neutral neutral mbstring.strict_detection Off Off mbstring.substitute_character no value no value mcrypt mcrypt support enabled Version 2.5.8 Api No 20021217 Supported ciphers cast-128 gost rijndael-128 twofish arcfour cast-256 loki97 rijndael-192 saferplus wake blowfish-compat des rijndael-256 serpent xtea blowfish enigma rc2 tripledes Supported modes cbc cfb ctr ecb ncfb nofb ofb stream Directive Local Value Master Value mcrypt.algorithms_dir no value no value mcrypt.modes_dir no value no value mhash MHASH support Enabled MHASH API Version Emulated Support ming Ming SWF output library enabled Version 0.4.3 mysql MySQL Support enabled Active Persistent Links 0 Active Links 0 Client API version 5.1.41 Directive Local Value Master Value mysql.allow_local_infile On On mysql.allow_persistent On On mysql.connect_timeout 60 60 mysql.default_host no value no value mysql.default_password no value no value mysql.default_port 3306 3306 mysql.default_socket MySQL MySQL mysql.default_user no value no value mysql.max_links Unlimited Unlimited mysql.max_persistent Unlimited Unlimited mysql.trace_mode Off Off mysqli MysqlI Support enabled Client API library version 5.1.41 Active Persistent Links 0 Inactive Persistent Links 0 Active Links 0 Client API header version 5.1.41 MYSQLI_SOCKET MySQL Directive Local Value Master Value mysqli.allow_local_infile On On mysqli.allow_persistent On On mysqli.default_host no value no value mysqli.default_port 3306 3306 mysqli.default_pw no value no value mysqli.default_socket MySQL MySQL mysqli.default_user no value no value mysqli.max_links Unlimited Unlimited mysqli.max_persistent Unlimited Unlimited mysqli.reconnect Off Off mysqlnd mysqlnd enabled Version mysqlnd 5.0.5-dev - 081106 - $Revision: 289630 $ Command buffer size 4096 Read buffer size 32768 Read timeout 31536000 Collecting statistics Yes Collecting memory statistics No Client statistics bytes_sent 0 bytes_received 0 packets_sent 0 packets_received 0 protocol_overhead_in 0 protocol_overhead_out 0 bytes_received_ok_packet 0 bytes_received_eof_packet 0 bytes_received_rset_header_packet 0 bytes_received_rset_field_meta_packet 0 bytes_received_rset_row_packet 0 bytes_received_prepare_response_packet 0 bytes_received_change_user_packet 0 packets_sent_command 0 packets_received_ok 0 packets_received_eof 0 packets_received_rset_header 0 packets_received_rset_field_meta 0 packets_received_rset_row 0 packets_received_prepare_response 0 packets_received_change_user 0 result_set_queries 0 non_result_set_queries 0 no_index_used 
0 bad_index_used 0 slow_queries 0 buffered_sets 0 unbuffered_sets 0 ps_buffered_sets 0 ps_unbuffered_sets 0 flushed_normal_sets 0 flushed_ps_sets 0 ps_prepared_never_executed 0 ps_prepared_once_executed 0 rows_fetched_from_server_normal 0 rows_fetched_from_server_ps 0 rows_buffered_from_client_normal 0 rows_buffered_from_client_ps 0 rows_fetched_from_client_normal_buffered 0 rows_fetched_from_client_normal_unbuffered 0 rows_fetched_from_client_ps_buffered 0 rows_fetched_from_client_ps_unbuffered 0 rows_fetched_from_client_ps_cursor 0 rows_skipped_normal 0 rows_skipped_ps 0 copy_on_write_saved 0 copy_on_write_performed 0 command_buffer_too_small 0 connect_success 0 connect_failure 0 connection_reused 0 reconnect 0 pconnect_success 0 active_connections 0 active_persistent_connections 0 explicit_close 0 implicit_close 0 disconnect_close 0 in_middle_of_command_close 0 explicit_free_result 0 implicit_free_result 0 explicit_stmt_close 0 implicit_stmt_close 0 mem_emalloc_count 0 mem_emalloc_ammount 0 mem_ecalloc_count 0 mem_ecalloc_ammount 0 mem_erealloc_count 0 mem_erealloc_ammount 0 mem_efree_count 0 mem_malloc_count 0 mem_malloc_ammount 0 mem_calloc_count 0 mem_calloc_ammount 0 mem_realloc_count 0 mem_realloc_ammount 0 mem_free_count 0 proto_text_fetched_null 0 proto_text_fetched_bit 0 proto_text_fetched_tinyint 0 proto_text_fetched_short 0 proto_text_fetched_int24 0 proto_text_fetched_int 0 proto_text_fetched_bigint 0 proto_text_fetched_decimal 0 proto_text_fetched_float 0 proto_text_fetched_double 0 proto_text_fetched_date 0 proto_text_fetched_year 0 proto_text_fetched_time 0 proto_text_fetched_datetime 0 proto_text_fetched_timestamp 0 proto_text_fetched_string 0 proto_text_fetched_blob 0 proto_text_fetched_enum 0 proto_text_fetched_set 0 proto_text_fetched_geometry 0 proto_text_fetched_other 0 proto_binary_fetched_null 0 proto_binary_fetched_bit 0 proto_binary_fetched_tinyint 0 proto_binary_fetched_short 0 proto_binary_fetched_int24 0 proto_binary_fetched_int 0 proto_binary_fetched_bigint 0 proto_binary_fetched_decimal 0 proto_binary_fetched_float 0 proto_binary_fetched_double 0 proto_binary_fetched_date 0 proto_binary_fetched_year 0 proto_binary_fetched_time 0 proto_binary_fetched_datetime 0 proto_binary_fetched_timestamp 0 proto_binary_fetched_string 0 proto_binary_fetched_blob 0 proto_binary_fetched_enum 0 proto_binary_fetched_set 0 proto_binary_fetched_geometry 0 proto_binary_fetched_other 0 init_command_executed_count 0 init_command_failed_count 0 odbc ODBC Support enabled Active Persistent Links 0 Active Links 0 ODBC library Win32 Directive Local Value Master Value odbc.allow_persistent On On odbc.check_persistent On On odbc.default_cursortype Static cursor Static cursor odbc.default_db no value no value odbc.default_pw no value no value odbc.default_user no value no value odbc.defaultbinmode return as is return as is odbc.defaultlrl return up to 4096 bytes return up to 4096 bytes odbc.max_links Unlimited Unlimited odbc.max_persistent Unlimited Unlimited openssl OpenSSL support enabled OpenSSL Library Version OpenSSL 0.9.8l 5 Nov 2009 OpenSSL Header Version OpenSSL 0.9.8l 5 Nov 2009 pcre PCRE (Perl Compatible Regular Expressions) Support enabled PCRE Library Version 8.00 2009-10-19 Directive Local Value Master Value pcre.backtrack_limit 100000 100000 pcre.recursion_limit 100000 100000 pdf PDF Support enabled PDFlib GmbH Version 7.0.4p4 PECL Version 2.1.6 Revision $Revision: 277110 $ PDO PDO support enabled PDO drivers mysql, odbc, sqlite, sqlite2 pdo_mysql PDO Driver for MySQL enabled 
Client API version 5.1.41 PDO_ODBC PDO Driver for ODBC (Win32) enabled ODBC Connection Pooling Enabled, strict matching pdo_sqlite PDO Driver for SQLite 3.x enabled SQLite Library 3.6.20 Phar Phar: PHP Archive support enabled Phar EXT version 2.0.1 Phar API version 1.1.1 CVS revision $Revision: 286338 $ Phar-based phar archives enabled Tar-based phar archives enabled ZIP-based phar archives enabled gzip compression enabled bzip2 compression enabled Native OpenSSL support enabled Phar based on pear/PHP_Archive, original concept by Davey Shafik. Phar fully realized by Gregory Beaver and Marcus Boerger. Portions of tar implementation Copyright (c) 2003-2009 Tim Kientzle. Directive Local Value Master Value phar.cache_list no value no value phar.readonly On On phar.require_hash On On Reflection Reflection enabled Version $Revision: 287991 $ session Session Support enabled Registered save handlers files user sqlite Registered serializer handlers php php_binary wddx Directive Local Value Master Value session.auto_start Off Off session.bug_compat_42 On On session.bug_compat_warn On On session.cache_expire 180 180 session.cache_limiter nocache nocache session.cookie_domain no value no value session.cookie_httponly Off Off session.cookie_lifetime 0 0 session.cookie_path / / session.cookie_secure Off Off session.entropy_file no value no value session.entropy_length 0 0 session.gc_divisor 100 100 session.gc_maxlifetime 1440 1440 session.gc_probability 1 1 session.hash_bits_per_character 5 5 session.hash_function 0 0 session.name PHPSESSID PHPSESSID session.referer_check no value no value session.save_handler files files session.save_path C:\xampp\tmp C:\xampp\tmp session.serialize_handler php php session.use_cookies On On session.use_only_cookies Off Off session.use_trans_sid 0 0 SimpleXML Simplexml support enabled Revision $Revision: 281953 $ Schema support enabled soap Soap Client enabled Soap Server enabled Directive Local Value Master Value soap.wsdl_cache 1 1 soap.wsdl_cache_dir /tmp /tmp soap.wsdl_cache_enabled 1 1 soap.wsdl_cache_limit 5 5 soap.wsdl_cache_ttl 86400 86400 sockets Sockets Support enabled SPL SPL support enabled Interfaces Countable, OuterIterator, RecursiveIterator, SeekableIterator, SplObserver, SplSubject Classes AppendIterator, ArrayIterator, ArrayObject, BadFunctionCallException, BadMethodCallException, CachingIterator, DirectoryIterator, DomainException, EmptyIterator, FilesystemIterator, FilterIterator, GlobIterator, InfiniteIterator, InvalidArgumentException, IteratorIterator, LengthException, LimitIterator, LogicException, MultipleIterator, NoRewindIterator, OutOfBoundsException, OutOfRangeException, OverflowException, ParentIterator, RangeException, RecursiveArrayIterator, RecursiveCachingIterator, RecursiveDirectoryIterator, RecursiveFilterIterator, RecursiveIteratorIterator, RecursiveRegexIterator, RecursiveTreeIterator, RegexIterator, RuntimeException, SplDoublyLinkedList, SplFileInfo, SplFileObject, SplFixedArray, SplHeap, SplMinHeap, SplMaxHeap, SplObjectStorage, SplPriorityQueue, SplQueue, SplStack, SplTempFileObject, UnderflowException, UnexpectedValueException SQLite SQLite support enabled PECL Module version 2.0-dev $Id: sqlite.c 289598 2009-10-12 22:37:52Z pajoye $ SQLite Library 2.8.17 SQLite Encoding iso8859 Directive Local Value Master Value sqlite.assoc_case 0 0 sqlite3 SQLite3 support enabled SQLite3 module version 0.7-dev SQLite Library 3.6.20 Directive Local Value Master Value sqlite3.extension_dir no value no value standard Dynamic Library Support 
enabled Internal Sendmail Support for Windows enabled Directive Local Value Master Value assert.active 1 1 assert.bail 0 0 assert.callback no value no value assert.quiet_eval 0 0 assert.warning 1 1 auto_detect_line_endings 0 0 default_socket_timeout 60 60 safe_mode_allowed_env_vars PHP_ PHP_ safe_mode_protected_env_vars LD_LIBRARY_PATH LD_LIBRARY_PATH url_rewriter.tags a=href,area=href,frame=src,input=src,form=,fieldset= a=href,area=href,frame=src,input=src,form=,fieldset= user_agent no value no value tokenizer Tokenizer Support enabled wddx WDDX Support enabled WDDX Session Serializer enabled xml XML Support active XML Namespace Support active libxml2 Version 2.7.6 xmlreader XMLReader enabled xmlrpc core library version xmlrpc-epi v. 0.54 php extension version 0.51 author Dan Libby homepage http://xmlrpc-epi.sourceforge.net open sourced by Epinions.com xmlwriter XMLWriter enabled xsl XSL enabled libxslt Version 1.1.26 libxslt compiled against libxml Version 2.7.6 EXSLT enabled libexslt Version 1.1.26 zip Zip enabled Extension Version $Id: php_zip.c 276389 2009-02-24 23:55:14Z iliaa $ Zip version 1.9.1 Libzip version 0.9.0 zlib ZLib Support enabled Stream Wrapper support compress.zlib:// Stream Filter support zlib.inflate, zlib.deflate Compiled Version 1.2.3 Linked Version 1.2.3 Directive Local Value Master Value zlib.output_compression Off Off zlib.output_compression_level -1 -1 zlib.output_handler no value no value Additional Modules Module Name Environment Variable Value no value ::=::\ no value C:=C:\xampp ALLUSERSPROFILE C:\Documents and Settings\All Users APPDATA C:\Documents and Settings\Andrew\Application Data CHROME_RESTART Google Chrome|Whoa! Google Chrome has crashed. Restart now?|LEFT_TO_RIGHT CHROME_VERSION 5.0.342.8 CLASSPATH .;C:\Program Files\QuickTime\QTSystem\QTJava.zip CommonProgramFiles C:\Program Files\Common Files COMPUTERNAME ANDREW_LAPTOP ComSpec C:\WINDOWS\system32\cmd.exe FP_NO_HOST_CHECK NO HOMEDRIVE C: HOMEPATH \Documents and Settings\Andrew LOGONSERVER \\ANDREW_LAPTOP NUMBER_OF_PROCESSORS 2 OS Windows_NT PATH C:\Documents and Settings\Andrew\Local Settings\Application Data\Google\Chrome\Application;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;c:\Program Files\Microsoft SQL Server\100\Tools\Binn\;c:\Program Files\Microsoft SQL Server\100\DTS\Binn\;C:\Program Files\QuickTime\QTSystem\;C:\Program Files\Common Files\DivX Shared\;C:\Program Files\WiTopia.Net\bin PATHEXT .COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH PROCESSOR_ARCHITECTURE x86 PROCESSOR_IDENTIFIER x86 Family 6 Model 15 Stepping 10, GenuineIntel PROCESSOR_LEVEL 6 PROCESSOR_REVISION 0f0a ProgramFiles C:\Program Files PROMPT $P$G QTJAVA C:\Program Files\QuickTime\QTSystem\QTJava.zip SESSIONNAME Console sfxcmd "C:\Documents and Settings\Andrew\My Documents\Downloads\xampp-win32-1.7.3.exe" sfxname C:\Documents and Settings\Andrew\My Documents\Downloads\xampp-win32-1.7.3.exe SystemDrive C: SystemRoot C:\WINDOWS TEMP C:\DOCUME~1\Andrew\LOCALS~1\Temp TMP C:\DOCUME~1\Andrew\LOCALS~1\Temp USERDOMAIN ANDREW_LAPTOP USERNAME Andrew USERPROFILE C:\Documents and Settings\Andrew VS100COMNTOOLS C:\Program Files\Microsoft Visual Studio 10.0\Common7\Tools\ windir C:\WINDOWS AP_PARENT_PID 2216 PHP Variables Variable Value _SERVER["MIBDIRS"] C:/xampp/php/extras/mibs _SERVER["MYSQL_HOME"] C:\xampp\mysql\bin _SERVER["OPENSSL_CONF"] C:/xampp/apache/bin/openssl.cnf _SERVER["PHP_PEAR_SYSCONF_DIR"] C:\xampp\php _SERVER["PHPRC"] C:\xampp\php _SERVER["TMP"] C:\xampp\tmp _SERVER["HTTP_HOST"] localhost 
_SERVER["HTTP_CONNECTION"] keep-alive _SERVER["HTTP_USER_AGENT"] Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/533.2 (KHTML, like Gecko) Chrome/5.0.342.8 Safari/533.2 _SERVER["HTTP_CACHE_CONTROL"] max-age=0 _SERVER["HTTP_ACCEPT"] application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5 _SERVER["HTTP_ACCEPT_ENCODING"] gzip,deflate,sdch _SERVER["HTTP_ACCEPT_LANGUAGE"] en-US,en;q=0.8 _SERVER["HTTP_ACCEPT_CHARSET"] ISO-8859-1,utf-8;q=0.7,*;q=0.3 _SERVER["PATH"] C:\Documents and Settings\Andrew\Local Settings\Application Data\Google\Chrome\Application;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;c:\Program Files\Microsoft SQL Server\100\Tools\Binn\;c:\Program Files\Microsoft SQL Server\100\DTS\Binn\;C:\Program Files\QuickTime\QTSystem\;C:\Program Files\Common Files\DivX Shared\;C:\Program Files\WiTopia.Net\bin _SERVER["SystemRoot"] C:\WINDOWS _SERVER["COMSPEC"] C:\WINDOWS\system32\cmd.exe _SERVER["PATHEXT"] .COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH _SERVER["WINDIR"] C:\WINDOWS _SERVER["SERVER_SIGNATURE"] <address>Apache/2.2.14 (Win32) DAV/2 mod_ssl/2.2.14 OpenSSL/0.9.8l mod_autoindex_color PHP/5.3.1 mod_apreq2-20090110/2.7.1 mod_perl/2.0.4 Perl/v5.10.1 Server at localhost Port 80</address> _SERVER["SERVER_SOFTWARE"] Apache/2.2.14 (Win32) DAV/2 mod_ssl/2.2.14 OpenSSL/0.9.8l mod_autoindex_color PHP/5.3.1 mod_apreq2-20090110/2.7.1 mod_perl/2.0.4 Perl/v5.10.1 _SERVER["SERVER_NAME"] localhost _SERVER["SERVER_ADDR"] 127.0.0.1 _SERVER["SERVER_PORT"] 80 _SERVER["REMOTE_ADDR"] 127.0.0.1 _SERVER["DOCUMENT_ROOT"] C:/xampp/htdocs _SERVER["SERVER_ADMIN"] postmaster@localhost _SERVER["SCRIPT_FILENAME"] C:/xampp/htdocs/test.php _SERVER["REMOTE_PORT"] 3275 _SERVER["GATEWAY_INTERFACE"] CGI/1.1 _SERVER["SERVER_PROTOCOL"] HTTP/1.1 _SERVER["REQUEST_METHOD"] GET _SERVER["QUERY_STRING"] no value _SERVER["REQUEST_URI"] /test.php _SERVER["SCRIPT_NAME"] /test.php _SERVER["PHP_SELF"] /test.php _SERVER["REQUEST_TIME"] 1270600868 _SERVER["argv"] Array ( ) _SERVER["argc"] 0 PHP License This program is free software; you can redistribute it and/or modify it under the terms of the PHP License as published by the PHP Group and included in the distribution in the file: LICENSE This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. If you did not receive a copy of the PHP license, or have any questions about PHP licensing, please contact [email protected].

    Read the article

  • Problems extracting information from RSS feed description field

    - by Graeme
Hi, I've built an iPhone application using the parsing code from the TopSongs sample iPhone application. I've hit a problem though - the feed I'm trying to parse data from doesn't have a separate field for every piece of information (i.e. if it were a feed about dogs, all the information such as dog type, dog age and dog price would be contained in the one description field). However, the TopSongs app relies on each piece of information having its own tag, whereas this feed only provides <title> and <description>. So my question is this: how do I extract this information from the description field so that it can be parsed using the TopSongs parser? Can you somehow extract the dog age, price and type information using Yahoo Pipes and use the resulting RSS feed? Or is there code that I can add to do it in the application? Update: To view the code of my application's parser (based on Apple's TopSongs Core Data sample application), see below. Here's a sample of one item from the actual RSS feed I'm using (the description is longer, and has status, size, and a couple of other fields, but they're all formatted the same): <item> <title>MOE, MARGRET STREET</title> <description> <b>District/Region:</b>&nbsp;REGION 09</br><b>Location:</b>&nbsp;MOE</br><b>Name:</b>&nbsp;MARGRET STREET</br></description> <pubDate>Thu,11 Mar 2010 05:43:03 GMT</pubDate> <guid>1266148</guid> </item> /* File: iTunesRSSImporter.m Abstract: Downloads, parses, and imports the iTunes top songs RSS feed into Core Data. Version: 1.1 Disclaimer: IMPORTANT: This Apple software is supplied to you by Apple Inc. ("Apple") in consideration of your agreement to the following terms, and your use, installation, modification or redistribution of this Apple software constitutes acceptance of these terms. If you do not agree with these terms, please do not use, install, modify or redistribute this Apple software. In consideration of your agreement to abide by the following terms, and subject to these terms, Apple grants you a personal, non-exclusive license, under Apple's copyrights in this original Apple software (the "Apple Software"), to use, reproduce, modify and redistribute the Apple Software, with or without modifications, in source and/or binary forms; provided that if you redistribute the Apple Software in its entirety and without modifications, you must retain this notice and the following text and disclaimers in all such redistributions of the Apple Software. Neither the name, trademarks, service marks or logos of Apple Inc. may be used to endorse or promote products derived from the Apple Software without specific prior written permission from Apple. Except as expressly stated in this notice, no other rights or licenses, express or implied, are granted by Apple herein, including but not limited to any patent rights that may be infringed by your derivative works or by other works in which the Apple Software may be incorporated. The Apple Software is provided by Apple on an "AS IS" basis. APPLE MAKES NO WARRANTIES, EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, REGARDING THE APPLE SOFTWARE OR ITS USE AND OPERATION ALONE OR IN COMBINATION WITH YOUR PRODUCTS.
IN NO EVENT SHALL APPLE BE LIABLE FOR ANY SPECIAL, INDIRECT, INCIDENTAL OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) ARISING IN ANY WAY OUT OF THE USE, REPRODUCTION, MODIFICATION AND/OR DISTRIBUTION OF THE APPLE SOFTWARE, HOWEVER CAUSED AND WHETHER UNDER THEORY OF CONTRACT, TORT (INCLUDING NEGLIGENCE), STRICT LIABILITY OR OTHERWISE, EVEN IF APPLE HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. Copyright (C) 2009 Apple Inc. All Rights Reserved. */ #import "iTunesRSSImporter.h" #import "Song.h" #import "Category.h" #import "CategoryCache.h" #import <libxml/tree.h> // Function prototypes for SAX callbacks. This sample implements a minimal subset of SAX callbacks. // Depending on your application's needs, you might want to implement more callbacks. static void startElementSAX(void *context, const xmlChar *localname, const xmlChar *prefix, const xmlChar *URI, int nb_namespaces, const xmlChar **namespaces, int nb_attributes, int nb_defaulted, const xmlChar **attributes); static void endElementSAX(void *context, const xmlChar *localname, const xmlChar *prefix, const xmlChar *URI); static void charactersFoundSAX(void *context, const xmlChar *characters, int length); static void errorEncounteredSAX(void *context, const char *errorMessage, ...); // Forward reference. The structure is defined in full at the end of the file. static xmlSAXHandler simpleSAXHandlerStruct; // Class extension for private properties and methods. @interface iTunesRSSImporter () @property BOOL storingCharacters; @property (nonatomic, retain) NSMutableData *characterBuffer; @property BOOL done; @property BOOL parsingASong; @property NSUInteger countForCurrentBatch; @property (nonatomic, retain) Song *currentSong; @property (nonatomic, retain) NSURLConnection *rssConnection; @property (nonatomic, retain) NSDateFormatter *dateFormatter; // The autorelease pool property is assign because autorelease pools cannot be retained. 
@property (nonatomic, assign) NSAutoreleasePool *importPool; @end static double lookuptime = 0; @implementation iTunesRSSImporter @synthesize iTunesURL, delegate, persistentStoreCoordinator; @synthesize rssConnection, done, parsingASong, storingCharacters, currentSong, countForCurrentBatch, characterBuffer, dateFormatter, importPool; - (void)dealloc { [iTunesURL release]; [characterBuffer release]; [currentSong release]; [rssConnection release]; [dateFormatter release]; [persistentStoreCoordinator release]; [insertionContext release]; [songEntityDescription release]; [theCache release]; [super dealloc]; } - (void)main { self.importPool = [[NSAutoreleasePool alloc] init]; if (delegate && [delegate respondsToSelector:@selector(importerDidSave:)]) { [[NSNotificationCenter defaultCenter] addObserver:delegate selector:@selector(importerDidSave:) name:NSManagedObjectContextDidSaveNotification object:self.insertionContext]; } done = NO; self.dateFormatter = [[[NSDateFormatter alloc] init] autorelease]; [dateFormatter setDateStyle:NSDateFormatterLongStyle]; [dateFormatter setTimeStyle:NSDateFormatterNoStyle]; // necessary because iTunes RSS feed is not localized, so if the device region has been set to other than US // the date formatter must be set to US locale in order to parse the dates [dateFormatter setLocale:[[[NSLocale alloc] initWithLocaleIdentifier:@"US"] autorelease]]; self.characterBuffer = [NSMutableData data]; NSURLRequest *theRequest = [NSURLRequest requestWithURL:iTunesURL]; // create the connection with the request and start loading the data rssConnection = [[NSURLConnection alloc] initWithRequest:theRequest delegate:self]; // This creates a context for "push" parsing in which chunks of data that are not "well balanced" can be passed // to the context for streaming parsing. The handler structure defined above will be used for all the parsing. // The second argument, self, will be passed as user data to each of the SAX handlers. The last three arguments // are left blank to avoid creating a tree in memory. context = xmlCreatePushParserCtxt(&simpleSAXHandlerStruct, self, NULL, 0, NULL); if (rssConnection != nil) { do { [[NSRunLoop currentRunLoop] runMode:NSDefaultRunLoopMode beforeDate:[NSDate distantFuture]]; } while (!done); } // Display the total time spent finding a specific object for a relationship NSLog(@"lookup time %f", lookuptime); // Release resources used only in this thread. 
xmlFreeParserCtxt(context); self.characterBuffer = nil; self.dateFormatter = nil; self.rssConnection = nil; self.currentSong = nil; [theCache release]; theCache = nil; NSError *saveError = nil; NSAssert1([insertionContext save:&saveError], @"Unhandled error saving managed object context in import thread: %@", [saveError localizedDescription]); if (delegate && [delegate respondsToSelector:@selector(importerDidSave:)]) { [[NSNotificationCenter defaultCenter] removeObserver:delegate name:NSManagedObjectContextDidSaveNotification object:self.insertionContext]; } if (self.delegate != nil && [self.delegate respondsToSelector:@selector(importerDidFinishParsingData:)]) { [self.delegate importerDidFinishParsingData:self]; } [importPool release]; self.importPool = nil; } - (NSManagedObjectContext *)insertionContext { if (insertionContext == nil) { insertionContext = [[NSManagedObjectContext alloc] init]; [insertionContext setPersistentStoreCoordinator:self.persistentStoreCoordinator]; } return insertionContext; } - (void)forwardError:(NSError *)error { if (self.delegate != nil && [self.delegate respondsToSelector:@selector(importer:didFailWithError:)]) { [self.delegate importer:self didFailWithError:error]; } } - (NSEntityDescription *)songEntityDescription { if (songEntityDescription == nil) { songEntityDescription = [[NSEntityDescription entityForName:@"Song" inManagedObjectContext:self.insertionContext] retain]; } return songEntityDescription; } - (CategoryCache *)theCache { if (theCache == nil) { theCache = [[CategoryCache alloc] init]; theCache.managedObjectContext = self.insertionContext; } return theCache; } - (Song *)currentSong { if (currentSong == nil) { currentSong = [[Song alloc] initWithEntity:self.songEntityDescription insertIntoManagedObjectContext:self.insertionContext]; } return currentSong; } #pragma mark NSURLConnection Delegate methods // Forward errors to the delegate. - (void)connection:(NSURLConnection *)connection didFailWithError:(NSError *)error { [self performSelectorOnMainThread:@selector(forwardError:) withObject:error waitUntilDone:NO]; // Set the condition which ends the run loop. done = YES; } // Called when a chunk of data has been downloaded. - (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data { // Process the downloaded chunk of data. xmlParseChunk(context, (const char *)[data bytes], [data length], 0); } - (void)connectionDidFinishLoading:(NSURLConnection *)connection { // Signal the context that parsing is complete by passing "1" as the last parameter. xmlParseChunk(context, NULL, 0, 1); context = NULL; // Set the condition which ends the run loop. done = YES; } #pragma mark Parsing support methods static const NSUInteger kImportBatchSize = 20; - (void)finishedCurrentSong { parsingASong = NO; self.currentSong = nil; countForCurrentBatch++; // Periodically purge the autorelease pool and save the context. The frequency of this action may need to be tuned according to the // size of the objects being parsed. The goal is to keep the autorelease pool from growing too large, but // taking this action too frequently would be wasteful and reduce performance. 
if (countForCurrentBatch == kImportBatchSize) { [importPool release]; self.importPool = [[NSAutoreleasePool alloc] init]; NSError *saveError = nil; NSAssert1([insertionContext save:&saveError], @"Unhandled error saving managed object context in import thread: %@", [saveError localizedDescription]); countForCurrentBatch = 0; } } /* Character data is appended to a buffer until the current element ends. */ - (void)appendCharacters:(const char *)charactersFound length:(NSInteger)length { [characterBuffer appendBytes:charactersFound length:length]; } - (NSString *)currentString { // Create a string with the character data using UTF-8 encoding. UTF-8 is the default XML data encoding. NSString *currentString = [[[NSString alloc] initWithData:characterBuffer encoding:NSUTF8StringEncoding] autorelease]; [characterBuffer setLength:0]; return currentString; } @end #pragma mark SAX Parsing Callbacks // The following constants are the XML element names and their string lengths for parsing comparison. // The lengths include the null terminator, to ensure exact matches. static const char *kName_Item = "item"; static const NSUInteger kLength_Item = 5; static const char *kName_Title = "title"; static const NSUInteger kLength_Title = 6; static const char *kName_Category = "category"; static const NSUInteger kLength_Category = 9; static const char *kName_Itms = "itms"; static const NSUInteger kLength_Itms = 5; static const char *kName_Artist = "description"; static const NSUInteger kLength_Artist = 7; static const char *kName_Album = "description"; static const NSUInteger kLength_Album = 6; static const char *kName_ReleaseDate = "releasedate"; static const NSUInteger kLength_ReleaseDate = 12; /* This callback is invoked when the importer finds the beginning of a node in the XML. For this application, out parsing needs are relatively modest - we need only match the node name. An "item" node is a record of data about a song. In that case we create a new Song object. The other nodes of interest are several of the child nodes of the Song currently being parsed. For those nodes we want to accumulate the character data in a buffer. Some of the child nodes use a namespace prefix. */ static void startElementSAX(void *parsingContext, const xmlChar *localname, const xmlChar *prefix, const xmlChar *URI, int nb_namespaces, const xmlChar **namespaces, int nb_attributes, int nb_defaulted, const xmlChar **attributes) { iTunesRSSImporter *importer = (iTunesRSSImporter *)parsingContext; // The second parameter to strncmp is the name of the element, which we known from the XML schema of the feed. // The third parameter to strncmp is the number of characters in the element name, plus 1 for the null terminator. if (prefix == NULL && !strncmp((const char *)localname, kName_Item, kLength_Item)) { importer.parsingASong = YES; } else if (importer.parsingASong && ( (prefix == NULL && (!strncmp((const char *)localname, kName_Title, kLength_Title) || !strncmp((const char *)localname, kName_Category, kLength_Category))) || ((prefix != NULL && !strncmp((const char *)prefix, kName_Itms, kLength_Itms)) && (!strncmp((const char *)localname, kName_Artist, kLength_Artist) || !strncmp((const char *)localname, kName_Album, kLength_Album) || !strncmp((const char *)localname, kName_ReleaseDate, kLength_ReleaseDate))) )) { importer.storingCharacters = YES; } } /* This callback is invoked when the parse reaches the end of a node. At that point we finish processing that node, if it is of interest to us. 
For "item" nodes, that means we have completed parsing a Song object. We pass the song to a method in the superclass which will eventually deliver it to the delegate. For the other nodes we care about, this means we have all the character data. The next step is to create an NSString using the buffer contents and store that with the current Song object. */ static void endElementSAX(void *parsingContext, const xmlChar *localname, const xmlChar *prefix, const xmlChar *URI) { iTunesRSSImporter *importer = (iTunesRSSImporter *)parsingContext; if (importer.parsingASong == NO) return; if (prefix == NULL) { if (!strncmp((const char *)localname, kName_Item, kLength_Item)) { [importer finishedCurrentSong]; } else if (!strncmp((const char *)localname, kName_Title, kLength_Title)) { importer.currentSong.title = importer.currentString; } else if (!strncmp((const char *)localname, kName_Category, kLength_Category)) { double before = [NSDate timeIntervalSinceReferenceDate]; Category *category = [importer.theCache categoryWithName:importer.currentString]; double delta = [NSDate timeIntervalSinceReferenceDate] - before; lookuptime += delta; importer.currentSong.category = category; } } else if (!strncmp((const char *)prefix, kName_Itms, kLength_Itms)) { if (!strncmp((const char *)localname, kName_Artist, kLength_Artist)) { NSString *string = importer.currentSong.artist; NSArray *strings = [string componentsSeparatedByString: @", "]; //importer.currentSong.artist = importer.currentString; } else if (!strncmp((const char *)localname, kName_Album, kLength_Album)) { importer.currentSong.album = importer.currentString; } else if (!strncmp((const char *)localname, kName_ReleaseDate, kLength_ReleaseDate)) { NSString *dateString = importer.currentString; importer.currentSong.releaseDate = [importer.dateFormatter dateFromString:dateString]; } } importer.storingCharacters = NO; } /* This callback is invoked when the parser encounters character data inside a node. The importer class determines how to use the character data. */ static void charactersFoundSAX(void *parsingContext, const xmlChar *characterArray, int numberOfCharacters) { iTunesRSSImporter *importer = (iTunesRSSImporter *)parsingContext; // A state variable, "storingCharacters", is set when nodes of interest begin and end. // This determines whether character data is handled or ignored. if (importer.storingCharacters == NO) return; [importer appendCharacters:(const char *)characterArray length:numberOfCharacters]; } /* A production application should include robust error handling as part of its parsing implementation. The specifics of how errors are handled depends on the application. */ static void errorEncounteredSAX(void *parsingContext, const char *errorMessage, ...) { // Handle errors as appropriate for your application. NSCAssert(NO, @"Unhandled error encountered during SAX parse."); } // The handler struct has positions for a large number of callback functions. If NULL is supplied at a given position, // that callback functionality won't be used. Refer to libxml documentation at http://www.xmlsoft.org for more information // about the SAX callbacks. 
static xmlSAXHandler simpleSAXHandlerStruct = { NULL, /* internalSubset */ NULL, /* isStandalone */ NULL, /* hasInternalSubset */ NULL, /* hasExternalSubset */ NULL, /* resolveEntity */ NULL, /* getEntity */ NULL, /* entityDecl */ NULL, /* notationDecl */ NULL, /* attributeDecl */ NULL, /* elementDecl */ NULL, /* unparsedEntityDecl */ NULL, /* setDocumentLocator */ NULL, /* startDocument */ NULL, /* endDocument */ NULL, /* startElement*/ NULL, /* endElement */ NULL, /* reference */ charactersFoundSAX, /* characters */ NULL, /* ignorableWhitespace */ NULL, /* processingInstruction */ NULL, /* comment */ NULL, /* warning */ errorEncounteredSAX, /* error */ NULL, /* fatalError //: unused error() get all the errors */ NULL, /* getParameterEntity */ NULL, /* cdataBlock */ NULL, /* externalSubset */ XML_SAX2_MAGIC, // NULL, startElementSAX, /* startElementNs */ endElementSAX, /* endElementNs */ NULL, /* serror */ }; Thanks.


  • client problems - misaligned expectations & not following SDLC protocols

    - by louism
    hi guys, im having some serious problems with a client on a project - i could use some advice please the short version i have been working with this client now for almost 6 months without any problems (a classified website project in the range of 500 hours) over the last few days things have drastically deteriorated to the point where ive had to place the project on-hold whilst i work-out what to do (this has pissed the client off even more) to be simplistic, the root cause of the issue is this: the client doesnt read the specs i make for him, i code the feature, he than wants to change things, i tell him its not to the agreed spec and that that change will have to be postponed and possibly charged for, he gets upset and rants saying 'hes paid for the feature' and im not keeping to the agreement (<- misalignment of expectations) i think the root cause of the root cause is my clients failure to take my SDLC protocols seriously. i have a bug tracking system in place which he practically refuses to use (he still emails me bugs), he doesnt seem to care to much for the protocols i use for dealing with scope creep and change control the whole situation came to a head recently where he 'cracked it' (an aussie term for being fed-up). the more terms like 'postponed for post-launch implementation', 'costed feature addition', and 'not to agreed spec' i kept using, the worse it got finally, he began to bully me - basically insisting i shut-up and do the work im being paid for. i wrote a long-winded email explaining how wrong he was on all these different points, and explaining what all the SDLC protocols do to protect the success of the project. than i deleted that email and wrote a new one in the new email, i suggested as a solution i write up a list of grievances we both had. we than review the list and compromise on different points: he gets some things he wants, i get some things i want. sometimes youve got to give ground to get ground his response to this suggestion was flat-out refusal, and a restatement that i should just get on with the work ive been paid to do so there you have the very subjective short version. if you have the time and inclination, the long version may be a little less bias as it has the email communiques between me and my client the long version (with background) the long version works by me showing you the email communiques which lead to the situation coming to a head. so here it is, judge for yourself where the trouble started... 1. client asked me why something was missing from a feature i just uploaded, my response was to show him what was in the spec: it basically said the item he was looking for was never going to be included 2. [clients response...] Memo Louis, We are following your own title fields and keeping a consistent layout. Why the big fuss about not adding "Part". It simply replaces "model" and is consistent with your current title fields. 3. [my response...] hi [client], the 'part' field appeared to me as a redundancy / mistake. 
i requested clarification but never received any in a timely manner (about 2 weeks ago) the specification for this feature also indicated it wasnt going to be included: RE: "Why the big fuss about not adding "Part" " it may not appear so, but it would actually be a lot of work for me to now add a 'Part' field it could take me up to 15-20 minutes to properly explain why its such a big undertaking to do this, but i would prefer to use that time instead to work on completing your v1.1 features as a simplistic explanation - it connects to the change in paradigm from a 'generic classified ad' model to a 'specific attributes for specific categories' model basically, i am saying it is a big fuss, but i understand that it doesnt look that way - after all, it is just one ity-bitty field :) if you require a fuller explanation, please let me know and i will commit the time needed to write that out also, if you recall when we first started on the project, i said that with the effort/time required for features, you would likely not know off the top of your head. you may think something is really complex, but in reality its quite simple, you might think something is easy - but it could actually be a massive trauma to code (which is the case here with the 'Part' field). if you also recalled, i said the best course of action is to just ask, and i would let you know on a case-by-case basis 4. [email from me to client...] hi [client], the online catalogue page is now up live (see my email from a few days ago for information on how it works) note: the window of opportunity for input/revisions on what data the catalogue stores has now closed (as i have put the code up live now) RE: the UI/layout of the online catalogue page you may still do visual/ui tweaks to the page at the moment (this window for input/revisions will close in a couple of days time) 5. [email from client to me...] *(note: i had put up the feature & asked the client to review it, never heard back from them for a few days)* Memo Louis, Here you go again. CLOSED without a word of input from the customer. I don't think so. I will reply tomorrow regarding the content and functionality we require from this feature. 5. [from me to client...] hi [client]: RE: from my understanding, you are saying that the mini-sale yard control would change itself based on the fact someone was viewing for parts & accessories <- is that correct? this change is outside the scope of the v1.1 mini-spec and therefore will need to wait 'til post launch for costing/implementation 6. [email from client to me...] Memo Louis, Following your v1.1 mini-spec and all your time paid in full for the work selected. We need to make the situation clear. There will be no further items held for post-launch. Do not expect us to pay for any further items other than those we have agreed upon. You have undertaken to complete the Parts and accessories feature as follows. Obviously, as part of this process the "mini search" will be effected, and will require "adaption to make sense". 7. [email from me to client...] hi [client], RE: "There will be no further items held for post-launch. Do not expect us to pay for any further items other than those we have agreed upon." a few points to consider: 1) the specification for the 'parts & accessories' feature was as follows: (i.e. [what] "...we have agreed upon.") 2) you have received the 'parts & accessories' feature free of charge (you have paid $0 for it). 
ive spent two days coding that feature as a gesture of good will i would request that you please consider these two facts carefully and sincerely 8. [email from client to me...] Memo Louis, I don't see how you are giving us anything for free. From your original fee proposal you have deleted more than 30 hours of included features. Your title "shelved features". Further you have charged us twice by adding back into the site, at an addition cost, some of those "shelved features" features. See v1.1 mini-spec. Did include in your original fee proposal a change request budget but then charge without discussion items included in v1.1 mini-spec. Included a further Features test plan for a regression test, a fee of 10 hours that would not have been required if the "shelved features" were not left out of the agreed fee proposal. I have made every attempt to satisfy your your uneven business sense by offering you everything your heart desired, in the v1.1 mini-spec, to be left once again with your attitude of "its too hard, lets leave it for post launch". I am no longer accepting anything less than what we have contracted you to do. That is clearly defined in v1.1 mini-spec, and you are paid in advance for delivering those items as an acceptable function. a few notes about the above email... i had to cull features from the original spec because it didnt fit into the budget. i explained this to the client at the start of the project (he wanted more features than he had budget hours to do them all) nothing has been charged for twice, i didnt charge the client for culled features. im charging him to now do those culled features the draft version of the project schedule included a change request budget of 10 hours, but i had to remove that to meet the budget (the client may not have been aware of this to be fair to them) what the client refers to as my attitude of 'too hard/leave it for post-launch', i called a change request protocol and a method for keeping scope creep under control 9. [email from me to client...] hi [client], RE: "...all your grievances..." i had originally written out a long email response; it was fantastic, it had all these great points of how 'you were wrong' and 'i was right', you would of loved it (and by 'loved it', i mean it would of just infuriated you more) so, i decided to deleted it start over, for two reasons: 1) a long email is being disrespectful of your time (youre a busy businessman with things to do) 2) whos wrong or right gets us no closer to fixing the problems we are experiencing what i propose is this... i prepare a bullet point list of your grievances and my grievances (yes, im unhappy too about how things are going - and it has little to do with money) i submit this list to you for you to add to as necessary we then both take a good hard look at this list, and we decide which areas we are willing to give ground on as an example, the list may look something like this: "louis, you keep taking away features you said you would do" [your grievance 2] [your grievance 3] [your grievance ...] "[client], i feel you dont properly read the specs i prepare for you..." [my grievance 2] [my grievance 3] [my grievance ...] if you are willing to give this a try, let me know will it work? who knows. but if it doesnt, we can always go back to arguing some more :) obviously, this will only work if you are willing to give it a genuine try, and you can accept that you may have to 'give some ground to get some ground' what do you think? 10. [email from client to me ...] 
Memo Louis, Instead of wasting your time listing grievances, I would prefer you complete the items in v1.1 mini-spec, to a satisfactory conclusion. We almost had the website ready for launch until you brought the v1.1 mini-spec into the frame. Obviously I expected you could complete the v1.1 mini-spec in a two-week time frame as you indicated and give the site a more profession presentation. Most of the problems have been caused by you not following our instructions, but deciding to do what you feel like at the time. And then arguing with us how the missing information is not necessary. For instance "Parts and Accessories". Why on earth would you leave out the parts heading, when it ties-in with the fields you have already developed. It replaces "model" and is just as important in the context of information that appears in the "Details" panel. We are at a stage where the the v1.1 mini-spec needs to be completed without further time wasting and the site is complete (subject to all features working). We are on standby at this end to do just that. Let me know when you are back, working on the site and we will process and complete each v1.1 mini-spec, item by item, until the job is complete. 11. [last email from me to client...] hi [client], based on this reply, and your demonstrated unwillingness to compromise/give any ground on issues at hand, i have decided to place your project on-hold for the moment i will be considering further options on how to over-come our challenges over the next few days i will contact you by monday 17/may to discuss any new options i have come up with, and if i believe it is appropriate to restart work on your project at that point or not told you it was long... what do you think?


  • Html.DropDownListFor() doesn't always render selected value

    - by Andrey
    I have a view model: public class LanguagesViewModel { public IEnumerable<LanguageItem> Languages { get; set; } public IEnumerable<SelectListItem> LanguageItems { get; set; } public IEnumerable<SelectListItem> LanguageLevelItems { get; set; } } public class LanguageItem { public int LanguageId { get; set; } public int SpeakingSkillId { get; set; } public int WritingSkillId { get; set; } public int UnderstandingSkillId { get; set; } public LanguagesViewModel Lvm { get; internal set; } } It's rendered with the following code: <tbody> <% foreach( var language in Model.Languages ) { Html.RenderPartial("LanguageItem", language); } %> </tbody> LanguageItem.ascx: <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<HrmCV.Web.ViewModels.LanguageItem>" %> <tr id="lngRow"> <td class="a_left"> <%: Html.DropDownListFor(m => m.LanguageId, Model.Lvm.LanguageItems, null, new { @class = "span-3" }) %> </td> <td> <%: Html.DropDownListFor(m => m.SpeakingSkillId, Model.Lvm.LanguageLevelItems, null, new { @class = "span-3" }) %> </td> <td> <%: Html.DropDownListFor(m => m.WritingSkillId, Model.Lvm.LanguageLevelItems, null, new { @class = "span-3" })%> </td> <td> <%: Html.DropDownListFor(m => m.UnderstandingSkillId, Model.Lvm.LanguageLevelItems, null, new { @class = "span-3" })%> </td> <td> <div class="btn-primary"> <a class="btn-primary-l" onclick="DeleteLanguage(this.id)" id="btnDel" href="javascript:void(0)"><%:ViewResources.SharedStrings.BtnDelete%></a> <span class="btn-primary-r"></span> </div> </td> </tr> The problem is that upon POST, the LanguageId dropdown doesn't render its previously selected value, while all the other dropdowns do. I cannot see any difference in the implementation between them. What is the reason for this behavior?
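    One workaround that often resolves this kind of symptom (offered only as a sketch, not verified against the exact code above) is to stop reusing the shared Lvm.LanguageItems collection directly and instead build a per-row SelectList whose selected value is supplied explicitly, so the helper does not have to infer the selection from ModelState/ViewData by name:

    <%-- Hypothetical variant of the LanguageId cell: the fourth SelectList argument marks the
         row's current LanguageId as selected. "Value" and "Text" are the standard SelectListItem
         properties and are assumed to be what the original item list populates. --%>
    <%: Html.DropDownListFor(m => m.LanguageId,
            new SelectList(Model.Lvm.LanguageItems, "Value", "Text", Model.LanguageId),
            null, new { @class = "span-3" }) %>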


  • Why is Moq throwing "Expected invocation on the mock at least once" when the setter is invoked once?

    - by Mohit
    Following is the code. Create a class library, add references to NUnit framework 2.5.3.9345 and Moq.dll 4.0.0.0, and paste in the following code. Try running it; on my machine it throws TestCase 'MoqTest.TryClassTest.IsMessageNotNull' failed: Moq.MockException : Expected invocation on the mock at least once, but was never performed: v => v.Model = It.Is(value(Moq.It+<>c__DisplayClass2`1[MoqTest.GenInfo]).match) at Moq.Mock.ThrowVerifyException(IProxyCall expected, Expression expression, Times times, Int32 callCount) at Moq.Mock.VerifyCalls(Interceptor targetInterceptor, MethodCall expected, Expression expression, Times times) at Moq.Mock.VerifySet[T](Mock`1 mock, Action`1 setterExpression, Times times, String failMessage) at Moq.Mock`1.VerifySet(Action`1 setterExpression) Class1.cs(22,0): at MoqTest.TryClassTest.IsMessageNotNull() using System; using System.Collections.Generic; using System.Linq; using System.Text; using Moq; using NUnit.Framework; namespace MoqTest { [TestFixture] public class TryClassTest { [Test] public void IsMessageNotNull() { var mockView = new Mock<IView<GenInfo>>(); mockView.Setup(v => v.ModuleId).Returns(""); TryPresenter tryPresenter = new TryPresenter(mockView.Object); tryPresenter.SetMessage(new object(), new EventArgs()); // mockView.VerifySet(v => v.Message, Times.AtLeastOnce()); mockView.VerifySet(v => v.Model = It.Is<GenInfo>(x => x != null)); } } public class TryPresenter { private IView<GenInfo> view; public TryPresenter(IView<GenInfo> view) { this.view = view; } public void SetMessage(object sender, EventArgs e) { this.view.Model = null; } } public class MyView : IView<GenInfo> { #region Implementation of IView<GenInfo> public string ModuleId { get; set; } public GenInfo Model { get; set; } #endregion } public interface IView<T> { string ModuleId { get; set; } T Model { get; set; } } public class GenInfo { public String Message { get; set; } } } And if you change the one line mockView.VerifySet(v => v.Model = It.Is<GenInfo>(x => x != null)); to mockView.VerifySet(v => v.Model, Times.AtLeastOnce()); it works fine. I think the exception message is incorrect.
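    For what it's worth, the failure is consistent with how Moq matches setter values (a sketch assuming Moq 4.x semantics, not a statement about the poster's exact build): SetMessage assigns null to Model, so a VerifySet constrained to non-null GenInfo values finds no matching invocation, and Moq reports the value-constrained setter as "never performed". Verifications that admit the actual call would pass:

    // Hypothetical adjustments that match the call SetMessage really makes (view.Model = null):
    mockView.VerifySet(v => v.Model = It.Is<GenInfo>(x => x == null)); // matches the null assignment
    mockView.VerifySet(v => v.Model = It.IsAny<GenInfo>());            // matches any assignment, including null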


  • WPF DataGrid binding to UserControl

    - by Trindaz
    I have a DataGrid with one column using a UserControl via a styled DataGridTemplateColumn. I can't seem to get the UserControl to 'see' the object that is in its containing DataGridCell though. What kind of bindings can I create on the TextBox in my UserControl so that it can look up and see that object?! My UserControl and TemplateColumn Style are defined as: <Window.Resources> <local:UCTest x:Key="UCTest" /> <Style x:Key="TestStyle" TargetType="{x:Type WpfToolkit:DataGridCell}"> <Style.Setters> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="{x:Type WpfToolkit:DataGridCell}"> <Grid Background="{Binding RelativeSource={RelativeSource TemplatedParent}, Converter={StaticResource drc}, Path=DataContext}"> <Grid.ColumnDefinitions> <ColumnDefinition/> <ColumnDefinition/> </Grid.ColumnDefinitions> <local:UCTest /> </Grid> </ControlTemplate> </Setter.Value> </Setter> </Style.Setters> </Style> </Window.Resources> and my sample DataGrid is defined as: <WpfToolkit:DataGrid Name="dgSampleData" ItemsSource="{Binding}" AutoGenerateColumns="True" Margin="0,75,0,0"> <WpfToolkit:DataGrid.Columns> <WpfToolkit:DataGridTemplateColumn Header="Col2" CellStyle="{StaticResource TestStyle}" /> </WpfToolkit:DataGrid.Columns> </WpfToolkit:DataGrid> and my User Control is defined in a separate file as: <UserControl x:Class="UCTest" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:local="clr-namespace:WpfApplication1" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Width="104" Height="51"> <UserControl.Resources> <local:DataRowConverter x:Key="drc" /> </UserControl.Resources> <Grid> <TextBox Margin="12,12,-155,16" Name="TextBox1" Text="" /> </Grid> </UserControl> EDIT: My implementation of TestClass, which has the Test Property, which I want UCTest.TextBox1 to bind to: Public Class TestClass Private _Test As String = "Hello World Property!" Public Property Test() As String Get Return _Test End Get Set(ByVal value As String) _Test = value End Set End Property End Class Thanks in advance!
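    One binding that often works here (a sketch only, assuming the row items are TestClass instances and that nothing inside UCTest overrides its inherited DataContext) is to let the row item flow into the UserControl via DataContext inheritance and bind the TextBox straight to Test, using a CellTemplate rather than retemplating the whole DataGridCell so that DataContext stays intact:

    <!-- Hypothetical sketch: inside UCTest, bind directly to the inherited row item -->
    <TextBox Name="TextBox1" Text="{Binding Test, Mode=TwoWay}" />

    <!-- and declare the column with a CellTemplate so the row item reaches the control -->
    <WpfToolkit:DataGridTemplateColumn Header="Col2">
        <WpfToolkit:DataGridTemplateColumn.CellTemplate>
            <DataTemplate>
                <local:UCTest />
            </DataTemplate>
        </WpfToolkit:DataGridTemplateColumn.CellTemplate>
    </WpfToolkit:DataGridTemplateColumn>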


  • Doing TDD Silverlight 4 RC using Visual Studio 2010 RC

    - by user133992
    First, I am glad to see better TDD support in VS2010. Support for generating code stubs from my tests is OK - not as good as more mature TDD plug-ins, but a good start. I am looking for some Silverlight 4.0 TDD best practices. First Question: Does anyone have links or recommendations? I know the new Silverlight Unit Test capabilities are much better (Jeff Wilcox's Mix presentation). What I am focusing on right now is using TDD to develop pure Silverlight 4.0 Class Library projects - projects without a Silverlight UI project. I've been able to get it to work, but not as cleanly as it should. I can create an empty VS solution, add a Silverlight 4 Class Library project, and add a Test Project (not a Silverlight Unit Test Project but a plain Test Project). Add a simple test in the Test Project such as: namespace Calculator.Test { [TestClass] public class CalculatorTests { [TestMethod] public void CalculatorAddTest() { Calc c = new Calc(); int expected = 10; int actual = c.Add(6, 4); Assert.AreEqual<int>(expected, actual); } } } Using the new Generate Type and Method from Test feature, it will generate the following code in the Silverlight project: namespace Calculator { public class Calc { public int Add(int p, int p_2) { throw new NotImplementedException(); } } } When I run the tests the first time, it says the target assembly is Silverlight and the test cannot be run - not the exact text, but the same general idea. When I change the implementation to: namespace Calculator { public class Calc { public int Add(int p, int p_2) { return p + p_2; } } } and re-run the test, it works fine. It also works for all other TDD code I generate afterwards. I also get a warning mark on the Test Project's reference to the Calculator Silverlight Class Library assembly. Second Question: Any comments or ideas on whether this is just a bug in VS2010 RC, or is Silverlight Class Library TDD not really supported? I have not created a Silverlight UI project or changed any build or debug settings, so I have no idea what is hosting the Silverlight DLL. Finally, some of the Silverlight Class Libraries I need to write will provide functionality that requires elevated Out-Of-Browser rights. Based on the above, it looks like I can use TDD Test Projects against regular Silverlight 4.0 Class Libraries, but I have no idea how I can TDD the elevated OOB functionality without also creating the UI component that gets installed. The UI piece is not really needed for the library development and gets in the way of what I actually want to TDD. I know I can (and will) mock some of that functionality, but at some point I will also need the real thing in my tests. Third Question: Any ideas on how to TDD a Silverlight 4.0 Class Library project that requires OOB elevated rights? Thanks!
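    On the third question, one approach worth sketching (the names below are hypothetical and not taken from the original post) is to keep the class library coded against a small interface for the elevated-trust operations, so the TDD cycle runs entirely against a fake in the plain Test Project while the real OOB-backed implementation only ever executes inside the installed application:

    // Hypothetical seam for elevated OOB functionality so it can be test-driven without a UI host.
    public interface IElevatedFileStore
    {
        string ReadAllText(string path);
    }

    // Fake used by the unit tests in the plain Test Project.
    public class FakeFileStore : IElevatedFileStore
    {
        public string ReadAllText(string path) { return "canned test data"; }
    }

    // Library code depends only on the interface; the implementation that calls the
    // elevated OOB APIs lives in, and is exercised by, the installed OOB application.
    public class DocumentLoader
    {
        private readonly IElevatedFileStore store;
        public DocumentLoader(IElevatedFileStore store) { this.store = store; }
        public int CountCharacters(string path) { return store.ReadAllText(path).Length; }
    }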


  • MVVM pattern and nested view models - communication and lookup lists

    - by LostInWPF
    I am using Prism for a new application that I am creating. There are several lookup lists that will be used in several places in the application, so it makes sense to define them once and use them everywhere I need that functionality. My current solution is to use typed data templates to render the controls inside a content control. <DataTemplate DataType={x:Type ListOfCountriesViewModel}> <ComboBox ItemsSource={Binding Countries} SelectedItem="{Binding SelectedCountry}"/> </DataTemplate> <DataTemplate DataType={x:Type ListOfRegionsViewModel}> <ComboBox ItemsSource={Binding Countries} SelectedItem={Binding SelectedRegion} /> </DataTemplate> public class ParentViewModel { SelectedCountry get; set; SelectedRegion get; set; ListOfCountriesViewModel CountriesVM; ListOfRegionsViewModel RgnsVM; } Then in my window I have 2 content controls and the rest of the controls: <ContentControl Content="{Binding CountriesVM}"></ContentControl> <ContentControl Content="{Binding RgnsVM}"></ContentControl> <Rest of controls on view> At the moment I have this working, and the SelectedItems for the combo boxes are publishing events via EventAggregator from the child view models, which are then subscribed to in the parent view model. I am not sure that this is the best way to go, as I can imagine I would end up with a lot of events very quickly and it would become unwieldy. Also, if I were to use the same view model on another window it would still publish the event this parent viewmodel is subscribed to, which could have unintended consequences. My questions are: Is this the best way to put lookup lists in a view which can be re-used across screens? How do I make it so that the combobox which is bound to the child viewmodel sets the relevant property on the parent viewmodel without using events / a mediator - e.g. SelectedCountry in this case? Any alternative implementation proposals for what I am trying to do? I have a feeling I am missing something obvious, and there is so much info it is hard to know what is right, so any help would be most gratefully received.
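    One alternative that avoids the EventAggregator entirely (a sketch only - the Country type and the assumption that the child VM implements INotifyPropertyChanged are mine, not taken from the post) is to have the parent view model observe the child lookup view model it owns, so the coupling stays local to the screen that composes the two and reuse on another window has no global side effects:

    // Hypothetical sketch: the parent subscribes to its own child lookup VM instead of a global event.
    public class ParentViewModel
    {
        public ListOfCountriesViewModel CountriesVM { get; private set; }
        public Country SelectedCountry { get; private set; }

        public ParentViewModel(ListOfCountriesViewModel countriesVM)
        {
            CountriesVM = countriesVM;
            // Assumes ListOfCountriesViewModel raises PropertyChanged for SelectedCountry.
            CountriesVM.PropertyChanged += (sender, e) =>
            {
                if (e.PropertyName == "SelectedCountry")
                    SelectedCountry = CountriesVM.SelectedCountry;
            };
        }
    }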


  • ActionScript/Flex ArrayCollection of Number objects to Java Collection<Long> using BlazeDS

    - by Justin
    Hello, I am using Flex 3 and make a call through a RemoteObject to a Java 1.6 method exposed with BlazeDS and Spring 2.5.5 Integration over a SecureAMFChannel. The ActionScript is as follows (this code is an example of the real thing, which is on a separate dev network); import com.adobe.cairngorm.business.ServiceLocator; import mx.collections.ArrayCollection; import mx.rpc.AsyncToken; import mx.rpc.remoting.RemoteObject; import mx.rpc.IResponder; public class MyClass implements IResponder { private var service:RemoteObject = ServiceLocator.getInstance().getRemoteObject("mySerivce"); [ArrayElementType("Number")] private var myArray:ArrayCollection; public function MyClass() { var id1:Number = 1; var id2:Number = 2; var id3:Number = 3; myArray = new ArrayCollection([id1, id2, id3]); getData(myArray); } public function getData(myArrayParam:ArrayCollection):void { var token:AsyncToken = service.getData(myArrayParam); token.addResponder(this.responder); //Assume responder implementation method exists and works } } Once created, this will make a call to the Java service class exposed through BlazeDS (assume the mechanics work, because they do for all other calls not involving Collection parameters). My Java service class looks like this; public class MySerivce { public Collection<DataObjectPOJO> getData(Collection<Long> myArrayParam) { //The following line is never executed and throws an exception for (Long l : myArrayParam) { System.out.println(l); } } } The exception that is thrown is a ClassCastException saying that a java.lang.Integer cannot be cast to a java.lang.Long. I worked around this issue by looping through the collection using Object instead, checking whether each element is an Integer, casting it, calling .longValue() on it, and adding it to a temp ArrayList. Yuk. The big problem is my application is supposed to handle records in the billions from a DB, and the id will overflow the 2.147 billion limit of an integer. I would love to have BlazeDS, or the JavaAdapter in it, translate the ActionScript Number to a Long as specified in the method signature. I hate that even though I use the generic, the underlying element type of the collection is an Integer. If this were straight Java, it wouldn't compile. Any ideas are appreciated. Solutions are even better! :)

