Search Results

Search found 5405 results on 217 pages for 'minimal requirements'.

  • jQuery Tools Overlay CSS conflict, image positioned under the overlay

    - by Ami Mahloof
    First, here's what I'm using and trying to do: the minimal setup for this effect: flowplayer.org/tools/demos/overlay/index.html, then the Apple Leopard Preview effect: flowplayer.org/tools/demos/overlay/apple.html. Now here's the page I'm having the issue with: http://gentle-mist-64.heroku.com/pictures

    My issue: when I click on an image, the picture shows under the overlay and to the right side. This has to be a conflict between my CSS positioning and the plugin's positioning; when I try this on a blank page with no layout, it works just fine. My layout CSS:

        body {
            background: url('/images/background.jpg');
        }
        #image_stage {
            position: relative;
            top: 30px;
            margin: auto;
            margin-top: 75px;
            background-color: white;
            width: 900px;
            height: 520px;
        }
        #image_inside_stage {
            float: left;
            margin-top: 7px;
            margin-left: 27px;
        }
        #logo {
            position: absolute;
            left: 725px;
            top: 4px;
        }
        #see_through_box {
            position: absolute;
            background-color: black;
            opacity: 0.66;
            -moz-opacity: 0.66;
            filter: alpha(opacity=66);
            width: 665px;
            height: 432px;
            margin: 45px;
            z-index: 99;
            -moz-border-radius-topleft: 15px;
            -moz-border-radius-topright: 0;
            -moz-border-radius-bottomleft: 0;
            -moz-border-radius-bottomright: 15px;
            -webkit-border-top-left-radius: 15px;
            -webkit-border-top-right-radius: 0;
            -webkit-border-bottom-left-radius: 0;
            -webkit-border-bottom-right-radius: 15px;
        }
        .inner_content {
            position: absolute;
            top: 75px;
            left: 75px;
            z-index: 99;
            color: whitesmoke;
        }

    Please help; I want this plugin to work. This is so much better than just a lightbox plugin, and I have used the plugin across my entire website and would like to keep on using it. I appreciate any input. Thanks, Ami

  • Memory allocation problem with SVMs in OpenCV

    - by worksintheory
    Hi, I've been using OpenCV happily for a while, but now I have a problem which has bugged me for quite some time. The following code is a reasonably minimal example of my problem:

        #include <cv.h>
        #include <ml.h>

        using namespace cv;

        int main(int argc, char **argv)
        {
            int sampleCountForTesting = 2731; // BROKEN: breaks svm.train_auto(...) for values of 2731 or greater!

            Mat trainingData( sampleCountForTesting, 1, CV_32FC1, Scalar::all(0.0) );
            Mat trainingResponses( sampleCountForTesting, 1, CV_32FC1, Scalar::all(0.0) );

            for(int j = 0; j < 6; j++)
            {
                trainingData.at<float>( j, 0 ) = (float) (j%2);
                trainingResponses.at<float>( j, 0 ) = (float) (j%2); // Setting a few values so I don't get a "single class" error
            }

            CvSVMParams svmParams(
                100, // 100 is CvSVM::C_SVC
                2,   // 2 is CvSVM::RBF
                1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
                NULL,
                TermCriteria( TermCriteria::MAX_ITER | TermCriteria::EPS, 2, 1.0 ) );

            CvSVM svm = CvSVM();
            svm.train_auto( trainingData, trainingResponses, Mat(), Mat(), svmParams );

            return 0;
        }

    I just create matrices to hold the training data and responses, then set a few entries to some value other than zero, then run the SVM. But it breaks whenever there are 2731 rows or more:

        OpenCV Error: One of arguments' values is out of range (requested size is negative or too big)
        in cvMemStorageAlloc, file [omitted]/opencv/OpenCV-2.2.0/modules/core/src/datastructs.cpp, line 332

    With fewer rows it seems to be fine, and a classifier trained in a similar manner to the above seems to give reasonable output. Am I doing something wrong? I'm pretty sure it's not actually anything to do with lack of memory, as I've got 6GB, and the code works fine when the data has 2730 rows and 10000 columns, which is a much bigger allocation. I'm running OpenCV 2.2 on OS X 10.6, and initially I thought the problem might be related to this bug, if for some reason the fix wasn't included in the MacPorts version. Now I've also tried downloading the most recent stable version from the OpenCV site, building with cmake, and using that, but I still get the same error, and the fix is definitely included in that version. Any help would be much appreciated! Thanks

  • WCF Timeout issue - should there even be a socket connection?

    - by stiank81
    I have a .NET application which is split into a client and a server side. The communication between them is handled using WCF. I'm not using the automagic service references; instead I've built the connection manually, as described in the screencast by Miguel Castro. Summarized, this means that I create a console application on the server side that holds ServiceHost objects for the different services:

        var myServiceHost = new System.ServiceModel.ServiceHost(typeof(MyService), new Uri("net.tcp://localhost:8002"));
        myServiceHost.Open();

    And on the client side I have service proxies creating channels using the ChannelFactory:

        IMyService proxy = new ChannelFactory<IMyService>("MyServiceEndpoint").CreateChannel();

    The client and server side share the service contract defined in the interface IMyService. Another advantage is that I get minimal App.config files, without all the autogenerated stuff created through the Service References. Example from the client side:

        <?xml version="1.0"?>
        <configuration>
          <system.serviceModel>
            <client>
              <endpoint address="net.tcp://localhost:8002/MyEndpoint"
                        binding="netTcpBinding"
                        contract="IMyService"
                        name="MyServiceEndpoint"/>
            </client>
          </system.serviceModel>
        </configuration>

    So, to my problem. I create the proxy once, and it holds a channel all the way through the application. However, if I leave the application without use for a few minutes, the channel times out, and I get the following exception:

        The socket connection was aborted. This could be caused by an error processing your message or a receive
        timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout
        was '00:00:59.9979998'.

    How do I prevent this? I'm assuming I need to specify a higher timeout in my configuration? But I don't want it to ever time out. But on the other hand, I don't want a socket connection! Do I need one? I thought I could go connectionless with WCF. What's the permanent solution and best practice for solving this?

    - Set the timeout to "never"?
    - Create a new channel for each request? I'm assuming there is some overhead in creating the channel.
    - Increase the timeout to e.g. 5 minutes and create a new channel if the connection did time out?
    - Make it connectionless somehow (without the overhead of creating channels)?
    - Something else?
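
    By way of illustration, here is a minimal sketch of the "new channel per request" option from the list above, reusing the IMyService contract and the MyServiceEndpoint configuration name from the question. The ServiceCaller helper name is made up; the factory is the expensive part, so it is created once, while the per-call channel is cheap to open and close:

        // Sketch: channel-per-call. The cached ChannelFactory does the heavy lifting;
        // each call gets a fresh channel, so idle timeouts never surface to the app.
        using System;
        using System.ServiceModel;

        public static class ServiceCaller // hypothetical helper
        {
            private static readonly ChannelFactory<IMyService> factory =
                new ChannelFactory<IMyService>("MyServiceEndpoint");

            public static void Call(Action<IMyService> action)
            {
                IMyService proxy = factory.CreateChannel();
                var channel = (IClientChannel)proxy;
                try
                {
                    action(proxy);
                    channel.Close();  // graceful close on success
                }
                catch
                {
                    channel.Abort();  // never Close a faulted channel
                    throw;
                }
            }
        }

    Usage would then look like ServiceCaller.Call(s => s.DoSomething()); whether the factory caching amortizes enough overhead for a given workload is something to measure.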

  • Does subTable support reRender?

    - by Tom
    Here is a minimal rich:dataTable with an a4j:commandLink inside. When clicked, it sends an AJAX request to my bean and reRenders the dataTable.

        <rich:dataTable id="dataTable" value="#{carManager.all}" var="item">
            <rich:column>
                <f:facet name="header">name</f:facet>
                <h:outputText value="#{item.name}" />
            </rich:column>
            <rich:column>
                <f:facet name="header">action</f:facet>
                <a4j:commandLink reRender="dataTable" value="Delete" action="#{carForm.delete}">
                    <f:setPropertyActionListener value="#{item.id}" target="#{carForm.id}" />
                    <f:param name="from" value="list" />
                </a4j:commandLink>
            </rich:column>
        </rich:dataTable>

    The example above works fine so far. But when I add a rich:subTable to the table, reRendering fails...

        <rich:dataTable id="dataTable" value="#{garageManager.all}" var="garage">
            <f:facet name="header">
                <rich:columnGroup>
                    <rich:column>name</rich:column>
                    <rich:column>action</rich:column>
                </rich:columnGroup>
            </f:facet>
            <rich:column colspan="2">
                <h:outputText value="#{garage.name}" />
            </rich:column>
            <rich:subTable value="#{garage.cars}" var="car">
                <rich:column><h:outputText value="#{car.name}" /></rich:column>
                <rich:column>
                    <a4j:commandLink reRender="dataTable" value="Delete" action="#{carForm.delete}">
                        <f:setPropertyActionListener value="#{item.id}" target="#{carForm.id}" />
                        <f:param name="from" value="list" />
                    </a4j:commandLink>
                </rich:column>
            </rich:subTable>
        </rich:dataTable>

    Now the rich:dataTable is not rerendered, but the item does get deleted, since the item no longer shows up after a complete page refresh. Does subTable support reRender the way I'd like to use it here? Thanks, Tom

  • Using commit monitors as a form of code review

    - by Jeff Dege
    I'm working in a small company: four developers, working on a variety of projects. We've been looking at what we can do as cost-effective methods of process improvement, and an idea came up. Given what we do, we often have single developers working on parts of a system, independently of the other developers. This can have a number of negative effects:

    - A developer might not be fully aware of the context in which a change is being implemented, and make the change in a way that will meet the current customer's needs, but will break functionality that other customers depend on.
    - A developer might make a change that breaks the current architectural design, introducing a dependency that will cause problems in future development.
    - Other developers might not be aware of how the system has changed, in areas that they have not worked on.

    We've talked about doing code reviews as a way of dealing with these issues, but we've not had much success when we tried. It takes a lot of time to prepare a change for a code review, and it takes everybody out of production while the review is being performed. And the benefits of any review we've tried have been minimal.

    We're using Subversion (with TortoiseSVN) as our VCS. I've been looking at the Subversion CommitMonitor tool, and wondering whether it might work as a sort of poor man's code review. It lists every commit made on the repository, allowing someone to see the changes that have been made, the log messages for each change, the files that were included in the change, and the specific lines in each file that were changed. Rather than scheduling a meeting and trying to get everybody together to review every change, we could just have every developer review every other developer's commits at whatever time was convenient. This would keep every developer abreast of what changes were being made elsewhere in the system, and would have every change reviewed for customer conflicts and design consistency, at a fairly low cost. If someone saw a problem with the code that was being checked in, he could discuss it with the developer who did the commit, or more likely schedule a meeting to discuss how the new feature could be implemented in a way that would not impact other users or screw up the architecture.

    Anyone else doing anything like this, using commit monitors for such a purpose?

  • Calling Save on a Configuration object with a custom sectionGroup of type NameValueSectionHandler

    - by Allen
    I've got a Configuration object that I can easily read and write settings from, and call Save(ConfigurationSaveMode.Minimal, true) on, and that typically works just fine. However, I now have a config file that has a custom sectionGroup defined of type System.Configuration.NameValueSectionHandler (I didn't write that part and I can't change it). Now when I call the Save method, it throws a TypeLoadException.

    Message:

        Type System.Configuration.NameValueSectionHandler does not inherit from System.Configuration.ConfigurationSectionGroup.

    Callstack:

        at System.Configuration.TypeUtil.VerifyAssignableType(Type baseType, Type type, Boolean throwOnError)
        at System.Configuration.TypeUtil.GetConstructorWithReflectionPermission(Type type, Type baseType, Boolean throwOnError)
        at System.Configuration.MgmtConfigurationRecord.CreateSectionGroupFactory(FactoryRecord factoryRecord)
        at System.Configuration.MgmtConfigurationRecord.EnsureSectionGroupFactory(FactoryRecord factoryRecord)
        at System.Configuration.MgmtConfigurationRecord.GetSectionGroup(String configKey)
        at System.Configuration.ConfigurationSectionGroupCollection.Get(String name)
        at System.Configuration.ConfigurationSectionGroupCollection.<GetEnumerator>d__0.MoveNext()
        at System.Configuration.Configuration.ForceGroupsRecursive(ConfigurationSectionGroup group)
        at System.Configuration.Configuration.SaveAsImpl(String filename, ConfigurationSaveMode saveMode, Boolean forceSaveAll)
        at System.Configuration.Configuration.Save(ConfigurationSaveMode saveMode, Boolean forceSaveAll)
        at MyCompany.MyProduct.blah.blah.Save() in C:\projects\blah\blah\blah.cs:line blah

    For reference, the configuration goes like so:

        <configuration>
          <configSections>
            ... normal sections here that work just fine, followed by ...
            <sectionGroup name="threadSettings" type="System.Configuration.NameValueSectionHandler">
              <section name="T1" type="System.Configuration.DictionarySectionHandler"/>
              <section name="T2" type="System.Configuration.DictionarySectionHandler"/>
              <section name="T3" type="System.Configuration.DictionarySectionHandler"/>
            </sectionGroup>
          </configSections>
          <applicationSettings />
          <threadSettings>
            <T1>
              <add key="key-here" value="value-here"/>
            </T1>
            ... T2 and T3 here as well, that's not interesting ...
          </threadSettings>
        </configuration>

    If I remove the actual sections and the definition for the custom section groups, I am able to read and write settings just fine, and even call Save just fine. Obviously the custom section groups could be set up a little nicer, but I can't fix that, so I'm wondering: is it possible to get the Configuration object to save if it has custom sectionGroups of the NameValueSectionHandler type?
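
    For context, sections declared with the old IConfigurationSectionHandler types are normally consumed through ConfigurationManager.GetSection rather than through the strongly typed Configuration object that Save lives on. A minimal sketch of that read path, using the "threadSettings/T1" path from the config above (DictionarySectionHandler hands back an IDictionary); whether the unusual group type also trips the runtime reader here is something to verify:

        // Sketch: reading one of the legacy handler-based sections declared above.
        using System.Collections;
        using System.Configuration;

        class ReadLegacySection
        {
            static void Main()
            {
                // DictionarySectionHandler produces an IDictionary (a Hashtable).
                var t1 = (IDictionary)ConfigurationManager.GetSection("threadSettings/T1");
                if (t1 != null)
                {
                    foreach (DictionaryEntry entry in t1)
                        System.Console.WriteLine("{0} = {1}", entry.Key, entry.Value);
                }
            }
        }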

  • Contract developer trying to get outsourcing contract with current client.

    - by Mike
    I work for a major bank as a contract software developer. I've been there a few months, and without exception this place has the worst software practices I've ever seen. The software my team makes has no formal testing, terrible code (not reusable, hard to read, etc.), minimal documentation, no defined development process, and an absolutely sickening amount of waste due to bureaucratic overhead.

    Part of my contract is to maintain a group of thousands of very poorly written batch jobs. When one of the jobs fails (read: crashes), it's a developer's job to look at the source, figure out what's wrong, fix it, and check it in. There is no quality assurance process or auditing of the results whatsoever. Once the developer says "it works", a manager signs off and it goes into production. What's disturbing is that these jobs essentially grab market data and put it into a third-party risk management system, which provides the bank with critical intelligence. I've discovered the disturbing truth that this has been happening since the 90s, and nobody really has evidence the system is getting the correct data!

    Without going into details, an issue arose on Friday that was so horrible I actually stormed out of the place. I was ready to quit, but I decided to just get out to calm my nerves and possibly go back Monday. I've been reflecting today on how to handle this. I have realized that, in probably less than 6 months, I could (with 2 other developers) remake a large component of this system. The new system would provide them with, as primary benefits, a maintainable codebase less prone to error and a solid QA framework. To do it properly I would have to be outside the bank; the internal bureaucracy is just too much. And moreover, I think a bank is fundamentally not a place that can make good software.

    This is my plan:

    1. Write a report explaining in depth all the problems with their current system.
    2. Explain why their software practices fail and generate a tremendous amount of error and waste. Use this as the basis for claiming the project must be developed externally.
    3. Write a high-level development plan, including what resources I will require.
    4. Hand 1, 2, 3 to my manager; hopefully he passes it up the chain. Worst case he fires me, but this isn't so bad.
    5. A convinced executive decides to award my company a contract for the new system.

    I have 8 years experience as a software contractor and have delivered my share of successful software products, but all working in-house for small/medium-sized companies. When I read this over, I think I have a dynamite plan. But since this is the first time doing something this bold, I have my doubts. My question is: is this a good idea? If you think not, please spare no detail.

  • Mixing policy-based design with CRTP in C++

    - by Eitan
    I'm attempting to write a policy-based host class (i.e., a class that inherits from its template class), with a twist: the policy class is also templated by the host class, so that it can access its types. One example where this might be useful is where a policy (used like a mixin, really) augments the host class with a polymorphic clone() method. Here's a minimal example of what I'm trying to do:

        template <template <class> class P>
        struct Host : public P<Host<P> > {
            typedef P<Host<P> > Base;
            typedef Host* HostPtr;
            Host(const Base& p) : Base(p) {}
        };

        template <class H>
        struct Policy {
            typedef typename H::HostPtr Hptr;
            Hptr clone() const {
                return Hptr(new H((Hptr)this));
            }
        };

        Policy<Host<Policy> > p;
        Host<Policy> h(p);

        int main() {
            return 0;
        }

    This, unfortunately, fails to compile, in what seems to me like a circular type dependency:

        try.cpp: In instantiation of 'Host<Policy>':
        try.cpp:10:   instantiated from 'Policy<Host<Policy> >'
        try.cpp:16:   instantiated from here
        try.cpp:2: error: invalid use of incomplete type 'struct Policy<Host<Policy> >'
        try.cpp:9: error: declaration of 'struct Policy<Host<Policy> >'
        try.cpp: In constructor 'Host<P>::Host(const P<Host<P> >&) [with P = Policy]':
        try.cpp:17:   instantiated from here
        try.cpp:5: error: type 'Policy<Host<Policy> >' is not a direct base of 'Host<Policy>'

    If anyone can spot an obvious mistake, or has successfully mixed CRTP into policies, I would appreciate any help.

  • How to maximize http.sys file upload performance

    - by anelson
    I'm building a tool that transfers very large streaming data sets (possibly on the order of terabytes in a single stream; routinely in the tens of gigabytes) from one server to another. The client portion of the tool will read blocks from the source disk and send them over the network. The server side will read these blocks off the network and write them to a file on the server disk.

    Right now I'm trying to decide which transport to use. The options are raw TCP and HTTP. I really, REALLY want to be able to use HTTP. The HttpListener (or WCF, if I want to go that route) makes it easy to plug in to the HTTP Server API (http.sys), and I can get things like authentication and SSL for free. The problem right now is performance.

    I wrote a simple test harness that sends 128K blocks of NULL bytes using the BeginWrite/EndWrite async I/O idiom, with async BeginRead/EndRead on the server side. I've modified this test harness so I can do this either with HTTP PUT operations via HttpWebRequest/HttpListener, or with plain old socket writes using TcpClient/TcpListener. To rule out issues with network cards or network pathways, both the client and server are on one machine and communicate over localhost.

    On my 12-core Windows 2008 R2 test server, the TCP version of this test harness can push bytes at 450MB/s, with minimal CPU usage. On the same box, the HTTP version of the test harness runs between 130MB/s and 200MB/s, depending upon how I tweak it. In both cases CPU usage is low, and the vast majority of what CPU usage there is is kernel time, so I'm pretty sure my usage of C# and the .NET runtime is not the bottleneck. The box has two 6-core Xeon X5650 processors, 24GB of single-ranked DDR3 RAM, and is used exclusively by me for my own performance testing.

    I already know about HTTP client tweaks like ServicePointManager.MaxServicePointIdleTime, ServicePointManager.DefaultConnectionLimit, ServicePointManager.Expect100Continue, and HttpWebRequest.AllowWriteStreamBuffering. Does anyone have any ideas for how I can get HTTP.sys performance beyond 200MB/s? Has anyone seen it perform this well in any environment?
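
    For reference, a minimal sketch of how the client-side knobs named above would be applied before opening the PUT request. The target URI, connection limit and idle time are placeholders, and SendChunked is an extra setting beyond the ones listed in the question:

        // Sketch: applying the client-side tweaks before starting a streaming PUT.
        using System;
        using System.Net;

        class UploadClientSetup
        {
            static HttpWebRequest CreateUploadRequest(Uri target) // target is a placeholder
            {
                ServicePointManager.DefaultConnectionLimit = 16;     // allow parallel connections
                ServicePointManager.Expect100Continue = false;       // skip the 100-Continue round trip
                ServicePointManager.MaxServicePointIdleTime = 60000; // keep connections alive longer

                var request = (HttpWebRequest)WebRequest.Create(target);
                request.Method = "PUT";
                request.SendChunked = true;                // stream without a Content-Length
                request.AllowWriteStreamBuffering = false; // don't buffer the whole body in memory
                return request;
            }
        }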

  • Rush Hour - Solving the game

    - by Rubys
    Rush Hour, if you're not familiar with it: the game consists of a collection of cars of varying sizes, set either horizontally or vertically, on an NxM grid that has a single exit. Each car can move forward/backward in the direction it's set in, as long as another car is not blocking it. You can never change the direction of a car. There is one special car, usually the red one. It's set in the same row that the exit is in, and the objective of the game is to find a series of moves (a move being: moving a car N steps back or forward) that will allow the red car to drive out of the maze.

    I've been trying to think how to solve this problem computationally, and I can really not think of any good solution. I came up with a few:

    1. Backtracking. This is pretty simple: recursion and some more recursion until you find the answer. However, each car can be moved a few different ways, and in each game state a few cars can be moved, so the resulting game tree will be HUGE.
    2. Some sort of constraint algorithm that will take into account what needs to be moved, and work recursively somehow. This is a very rough idea, but it is an idea.
    3. Graphs? Model the game states as a graph and apply some sort of variation on a coloring algorithm, to resolve dependencies? Again, this is a very rough idea.
    4. A friend suggested genetic algorithms. This is sort of possible but not easily. I can't think of a good way to make an evaluation function, and without that we've got nothing.

    So the question is: how do you create a program that takes a grid and the vehicle layout, and outputs a series of steps needed to get the red car out?

    Sub-issues:

    - Finding some solution.
    - Finding an optimal solution (minimal number of moves).
    - Evaluating how good a current state is.

    Example: how can you move the cars in this setting, so that the red car can "exit" the maze through the exit on the right?
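
    The graph idea above can be pushed a step further: treat each board position as a node and each legal move as an edge, and a plain breadth-first search then finds a solution with the minimal number of moves. A rough sketch of that search; the state encoding and the move generator are deliberately left abstract (the names ShortestPath and expand are made up):

        // Sketch: BFS over game states. The first goal state dequeued is reachable
        // in the minimum number of moves, because BFS explores states level by level.
        using System;
        using System.Collections.Generic;

        static class StateSearch
        {
            public static List<string> ShortestPath(
                string start,
                Func<string, bool> isGoal,
                Func<string, IEnumerable<string>> expand)
            {
                var parent = new Dictionary<string, string> { [start] = null };
                var queue = new Queue<string>();
                queue.Enqueue(start);

                while (queue.Count > 0)
                {
                    string state = queue.Dequeue();
                    if (isGoal(state))
                    {
                        // Walk the parent links back to the start.
                        var path = new List<string>();
                        for (string s = state; s != null; s = parent[s])
                            path.Add(s);
                        path.Reverse();
                        return path;
                    }
                    foreach (string next in expand(state))
                    {
                        if (!parent.ContainsKey(next)) // visited check
                        {
                            parent[next] = state;
                            queue.Enqueue(next);
                        }
                    }
                }
                return null; // no solution exists
            }
        }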

  • Is it normal for a programmer with 2 years experience to take a long time to code simple programs?

    - by ajax81
    Hi all, I'm a relatively new programmer (18 months on the scene), and I'm finally getting to the point where I'm comfortable accepting projects and developing solutions under minimal supervision. Unfortunately, this also means that I've become acutely aware of my performance shortfalls, the most prevalent of which is the amount of time it takes me to develop, test, and submit algorithms for review.

    A great example of what I'm talking about occurred this week, when I was tasked with developing a simple XML web service (ASP.NET 3.5), callable via client-side JavaScript, that accepts a single parameter and returns a dataset output to a modal window (please note this is the first time I've had to develop a web service, and I have had ZERO experience creating/consuming them, let alone calling them from client-side JS). Keeping a long story short: I worked on it for 4 days straight, all day each day, for a grand total of 36 hours, not including the time I spent dwelling on the problem in the shower, on the morning commute, and lying awake in bed at night. I learned a great deal about web services and xml/json/javascript, but was called in for a management review to discuss the length of time it took me to develop the solution. In the meeting I was praised for the quality of my work, and was in fact told that my effort was commendable. However, they (senior leads and PMs) weren't impressed with the amount of time it took me to develop the solution, and expressed that they would have liked to see the solution in roughly 1/3 of the time it took me.

    I guess what concerns me the most is that I've identified this pattern as common for myself. Between online videos, book research, and trial-and-error coding, if it's something I haven't seen before, I can spend up to two weeks on a problem that seems to take the pros in the videos only moments to code up. And of course, knowing that management isn't happy with this pattern has shaken me up a bit.

    To sum up, I have some very specific questions I'd like to ask, and would greatly appreciate your objective professional feedback:

    - Is my experience as a junior programmer common among new developers? Or is it possible that I'm just not cut out for the work?
    - If you suspect that my experience is not common and that there may be an aptitude issue, do you have any suggestions/solutions that I could propose to management to help bring me up to speed?
    - Do seasoned, professional programmers ever encounter knowledge barriers that considerably delay deliverables?
    - When you started out in the industry, did you know how to "do it all"? If not, how long did it take you to be perceived as "proficient"? Was it a natural progression of trial and error, or was there a particular zen moment when you knew you had achieved super saiyan power level?

    Anyways, thanks for taking the time to read my question(s). I don't know if this is the right place to ask for professional career guidance, but I greatly appreciate your willingness to help me out. Cheers, Daniel

  • Volunteer for a potential employer?

    - by EoRaptor013
    I've been looking for work since March, and haven't had much luck. Recently, however, I interviewed with a small company near my home for a C#, .NET, SQL development position. I hit it off very well with the hiring manager during the phone screen, and even more so during the face-to-face. Unfortunately, I failed the practical test: wiring up a web form, creating a couple of SQL stored procedures, saving new data with validation, and creating a minimal search screen. I knew what I was doing, but I was too slow to meet their standards, as all the work needed to be done within an hour. Nevertheless, I really liked the place, the environment, the people I would have been working with, and the boss. (I gave the company an 11 on Joel's 12-point scale.)

    So, the obvious next step was to scrape the rust off. I've been trying to create little projects for myself, but I don't know that I've been effective in getting any faster. What with all that goes into creating a project, I'm not heads-down coding as much as I think I need.

    Now, with all that introduction, here's the question. I have been thinking about calling the hiring manager at that place and asking him to let me volunteer for three or four weeks, with no strings attached. I think it would benefit me, and wouldn't cost him anything (as long as I didn't slow the existing people down!). At the end of that period, he might or might not be inclined to hire me, but even if not, I would have had as much as 160 hours of in-the-trenches development. Maybe not all shiny, but no more rust, I would think.

    Does this plan make any sense at all? I certainly don't want to sound desperate (although I'm not far from being there), and I very much need the tune-up, lube, and oil change. What's the downside, if any, to me doing this? Do any of you see red flags going up, either from the perspective of the hiring manager or from the perspective of a developer?

  • Subroutine & GoTo design

    - by sub
    I have a strange question concerning subroutines. As I'm creating a minimal language and I don't want to add high-level loops like while or for, I was planning on just adding gotos to keep it Turing-complete. Then I thought: eww, gotos. I wouldn't want to program in that language if I had to use gotos so often. So I thought about adding subroutines instead.

    I see the difference as the following:

    - gotos: go to (captain obvious) a previously defined point and continue executing the program from there. Leads to hardly understandable and buggy code, I think that's a fact.
    - subroutines: similar. You define their starting point somewhere; as you call them, the program jumps there, but the subroutine can go back to the point it was called from with return.

    Okay. Why didn't I just add the more function-like, nicer-looking subroutines? Because: in order to make return work if I call subroutines from within subroutines from within other subroutines, I'd have to use a stack containing, at the top, the point where the currently running subroutine came from. That would then mean that if I create loops using the subroutines, I'd end up with an extremely memory-eating, overflowing stack of return locations. Not good. Don't think of my subroutines as functions. They are just gotos that return to the point they were called from; they don't actually give back values like the return x; statement in nearly all of today's languages.

    Now to my actual questions:

    - How can I solve the above problem with the stack overflow on loops with subroutines? Do I have to add a separate goto language construct without the return option?
    - Assembler doesn't have loops, but from what I have seen it has myJumpPoint:, jnz, jz, retn. That means to me that there must also be a stack containing all the return locations. Am I right about that? What about long-running loops then? Don't they overflow the stack/eat memory?
    - Am I getting the retn symbol in assembler totally wrong? If yes, please explain it to me.
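
    To make the distinction concrete, here is a toy dispatch loop (entirely made up for illustration, not from any particular language): a jump only rewrites the instruction pointer, so looping with it costs no memory, while a call pushes a return address that ret later pops, so only call/ret pairs grow a stack:

        // Sketch: Jump never touches the return stack; Call/Ret do.
        using System;
        using System.Collections.Generic;

        enum Op { Jump, Call, Ret, Halt }

        struct Instr { public Op Op; public int Target; }

        class Vm
        {
            public static void Run(Instr[] program)
            {
                var returnStack = new Stack<int>();
                int ip = 0; // instruction pointer
                while (true)
                {
                    Instr i = program[ip];
                    switch (i.Op)
                    {
                        case Op.Jump:
                            ip = i.Target;            // no stack growth: loops are free
                            break;
                        case Op.Call:
                            returnStack.Push(ip + 1); // remember where to come back to
                            ip = i.Target;
                            break;
                        case Op.Ret:
                            ip = returnStack.Pop();   // the analogue of what retn pops
                            break;
                        case Op.Halt:
                            return;
                    }
                }
            }
        }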

  • XPathNavigator in Silverlight

    - by vladimir
    I have a code library that makes heavy use of XPathNavigator to parse a specific xml document. The xml document is cross-referenced, meaning that an element can reference another which has not yet been encountered during parsing:

        <ElementA ...>
            <DependentElementX id="1234">
        </ElementA>
        <ElementX id="1234" .../>

    The document doesn't really look like this, but the point is that 1) there is an xml schema that enforces the overall document structure, 2) elements inside the document can reference each other using some IDs, and 3) there are quite a few such cross references between different elements in the document.

    The document is parsed in two phases. In the first pass I walk through the document:

        XPathDocument doc = ...;
        XPathNavigator nav = doc.CreateNavigator();
        nav.MoveToRoot();
        nav.MoveToFirstChild()...

    and occasionally 'bookmark' the current position (element) in the document using the XPathNavigator.Clone() method. This gives me a lightweight instance of an XPathNavigator which I can store somewhere and use later to jump back to a particular place (element) in my document. Once I have enough information collected in the first pass (for example, I have made sure there is indeed an ElementX with an id='1234'), I jump back to the saved bookmarks (using those saved XPathNavigators) and complete the parsing.

    Well, now I'm about to use this library in Silverlight 3.0, and to my horror XPathNavigator is not in the System.Xml assembly. Questions:

    1) Am I missing something obvious (i.e., XPathNavigator does exist in some shape or form, for example in a toolkit or a freeware library)?

    2) If I do have to make modifications in the code, what would be the best way to go? Ideally, I would like to make minimal changes, not rewrite 80% of the code just to be able to use something like XLinq.

    To summarize: in case I have to give up XPathNavigator, all I need is a way to bookmark places in my document and get back to them, so that I can continue iterating from where I left off. Thanks in advance for any help/ideas.
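
    For readers unfamiliar with the pattern being described, a minimal sketch of the two-pass bookmark idea on the full framework (the file name and XPath expression are invented for the example). The Clone() is essential: the iterator reuses its navigator, so only a clone preserves the position:

        // Sketch: first pass collects cloned navigators as bookmarks,
        // second pass jumps straight back to each saved position.
        using System.Collections.Generic;
        using System.Xml.XPath;

        class BookmarkDemo
        {
            static void Main()
            {
                var doc = new XPathDocument("data.xml"); // hypothetical file
                XPathNavigator nav = doc.CreateNavigator();
                var bookmarks = new List<XPathNavigator>();

                foreach (XPathNavigator node in nav.Select("//*[@id]"))
                    bookmarks.Add(node.Clone()); // cheap, position-preserving copy

                foreach (XPathNavigator mark in bookmarks)
                    System.Console.WriteLine(mark.GetAttribute("id", ""));
            }
        }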

  • PyOpenGL - passing transformation matrix into shader

    - by M-V
    I am having trouble passing projection and modelview matrices into the GLSL shader from my PyOpenGL code. My understanding is that OpenGL matrices are column-major, but when I pass in projection and modelview matrices as shown, I don't see anything. I tried the transpose of the matrices, and it worked for the modelview matrix, but the projection matrix doesn't work either way. Here is the code:

        import OpenGL
        from OpenGL.GL import *
        from OpenGL.GL.shaders import *
        from OpenGL.GLU import *
        from OpenGL.GLUT import *
        from OpenGL.GLUT.freeglut import *
        from OpenGL.arrays import vbo
        import numpy, math, sys

        strVS = """
        attribute vec3 aVert;
        uniform mat4 uMVMatrix;
        uniform mat4 uPMatrix;
        uniform vec4 uColor;
        varying vec4 vCol;
        void main() {
            // option #1 - fails
            gl_Position = uPMatrix * uMVMatrix * vec4(aVert, 1.0);
            // option #2 - works
            gl_Position = vec4(aVert, 1.0);
            // set color
            vCol = vec4(uColor.rgb, 1.0);
        }
        """

        strFS = """
        varying vec4 vCol;
        void main() {
            // use vertex color
            gl_FragColor = vCol;
        }
        """

        # particle system class
        class Scene:
            # initialization
            def __init__(self):
                # create shader
                self.program = compileProgram(compileShader(strVS, GL_VERTEX_SHADER),
                                              compileShader(strFS, GL_FRAGMENT_SHADER))
                glUseProgram(self.program)
                self.pMatrixUniform = glGetUniformLocation(self.program, 'uPMatrix')
                self.mvMatrixUniform = glGetUniformLocation(self.program, "uMVMatrix")
                self.colorU = glGetUniformLocation(self.program, "uColor")
                # attributes
                self.vertIndex = glGetAttribLocation(self.program, "aVert")
                # color
                self.col0 = [1.0, 1.0, 0.0, 1.0]
                # define quad vertices
                s = 0.2
                quadV = [
                    -s, s, 0.0,
                    -s, -s, 0.0,
                    s, s, 0.0,
                    s, s, 0.0,
                    -s, -s, 0.0,
                    s, -s, 0.0
                ]
                # vertices
                self.vertexBuffer = glGenBuffers(1)
                glBindBuffer(GL_ARRAY_BUFFER, self.vertexBuffer)
                vertexData = numpy.array(quadV, numpy.float32)
                glBufferData(GL_ARRAY_BUFFER, 4*len(vertexData), vertexData, GL_STATIC_DRAW)

            # render
            def render(self, pMatrix, mvMatrix):
                # use shader
                glUseProgram(self.program)
                # set proj matrix
                glUniformMatrix4fv(self.pMatrixUniform, 1, GL_FALSE, pMatrix)
                # set modelview matrix
                glUniformMatrix4fv(self.mvMatrixUniform, 1, GL_FALSE, mvMatrix)
                # set color
                glUniform4fv(self.colorU, 1, self.col0)
                # enable arrays
                glEnableVertexAttribArray(self.vertIndex)
                # set buffers
                glBindBuffer(GL_ARRAY_BUFFER, self.vertexBuffer)
                glVertexAttribPointer(self.vertIndex, 3, GL_FLOAT, GL_FALSE, 0, None)
                # draw
                glDrawArrays(GL_TRIANGLES, 0, 6)
                # disable arrays
                glDisableVertexAttribArray(self.vertIndex)

        class Renderer:
            def __init__(self):
                pass

            def reshape(self, width, height):
                self.width = width
                self.height = height
                self.aspect = width/float(height)
                glViewport(0, 0, self.width, self.height)
                glEnable(GL_DEPTH_TEST)
                glDisable(GL_CULL_FACE)
                glClearColor(0.8, 0.8, 0.8, 1.0)
                glutPostRedisplay()

            def keyPressed(self, *args):
                sys.exit()

            def draw(self):
                glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
                # build projection matrix
                fov = math.radians(45.0)
                f = 1.0/math.tan(fov/2.0)
                zN, zF = (0.1, 100.0)
                a = self.aspect
                pMatrix = numpy.array([f/a, 0.0, 0.0, 0.0,
                                       0.0, f, 0.0, 0.0,
                                       0.0, 0.0, (zF+zN)/(zN-zF), -1.0,
                                       0.0, 0.0, 2.0*zF*zN/(zN-zF), 0.0], numpy.float32)
                # modelview matrix
                mvMatrix = numpy.array([1.0, 0.0, 0.0, 0.0,
                                        0.0, 1.0, 0.0, 0.0,
                                        0.0, 0.0, 1.0, 0.0,
                                        0.5, 0.0, -5.0, 1.0], numpy.float32)
                # render
                self.scene.render(pMatrix, mvMatrix)
                # swap buffers
                glutSwapBuffers()

            def run(self):
                glutInitDisplayMode(GLUT_RGBA)
                glutInitWindowSize(400, 400)
                self.window = glutCreateWindow("Minimal")
                glutReshapeFunc(self.reshape)
                glutDisplayFunc(self.draw)
                glutKeyboardFunc(self.keyPressed)  # Checks for key strokes
                self.scene = Scene()
                glutMainLoop()

        glutInit(sys.argv)
        prog = Renderer()
        prog.run()

    When I use option #2 in the shader, without either matrix, I get the following output (screenshot not reproduced here). What am I doing wrong?

  • Finding what makes strings unique in a list, can you improve on brute force?

    - by Ed Guiness
    Suppose I have a list of strings where each string is exactly 4 characters long and unique within the list. For each of these strings I want to identify the positions of the characters within the string that make the string unique. So for a list of three strings:

        abcd
        abcc
        bbcb

    For the first string I want to identify the character in the 4th position, d, since d does not appear in the 4th position in any other string. For the second string I want to identify the character in the 4th position, c. For the third string I want to identify the character in the 1st position, b, AND the character in the 4th position, also b. This could be concisely represented as:

        abcd -> ...d
        abcc -> ...c
        bbcb -> b..b

    If you consider the same problem but with a list of binary numbers:

        0101
        0011
        1111

    Then the result I want would be:

        0101 -> ..0.
        0011 -> .0..
        1111 -> 1...

    Staying with the binary theme, I can use XOR to identify which bits are unique within two binary numbers, since

        0101 ^ 0011 = 0110

    which I can interpret as meaning that, in this case, the 2nd and 3rd bits (reading left to right) are unique between these two binary numbers. This technique might be a red herring unless somehow it can be extended to the larger list.

    A brute-force approach would be to look at each string in turn, and for each string to iterate through vertical slices of the remainder of the strings in the list. So for the list above, I would start with abcd and iterate through vertical slices of

        abcc
        bbcb

    where these vertical slices would be

        a | b | c | c
        b | b | c | b

    or in list form, "ab", "bb", "cc", "cb". This would result in four comparisons:

        a : ab -> . (a is not unique)
        b : bb -> . (b is not unique)
        c : cc -> . (c is not unique)
        d : cb -> d (d is unique)

    or concisely:

        abcd -> ...d

    Maybe it's wishful thinking, but I have a feeling that there should be an elegant and general solution that would apply to an arbitrarily large list of strings (or binary numbers). But if there is, I haven't yet been able to see it. I hope to use this algorithm to derive minimal signatures from a collection of unique images (bitmaps) in order to efficiently identify those images at a future time. If future efficiency weren't a concern, I would use a simple hash of each image.

    Can you improve on brute force?
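
    One observation that avoids the pairwise vertical-slice comparisons: a character at position p is unique exactly when its count in column p across the whole list is 1, so two linear passes suffice (O(N*L) instead of comparing every string against every other). A sketch of that idea, my own take rather than anything from the question (the method name is invented); running it on the abcd/abcc/bbcb list reproduces ...d, ...c, b..b:

        // Sketch: per-column character counts; a character is "unique" when
        // nothing else in the list occupies the same column with the same value.
        using System;
        using System.Collections.Generic;

        static class UniquePositions
        {
            public static string[] Mark(string[] items)
            {
                int len = items[0].Length; // all strings have equal length

                // One character-count table per column.
                var counts = new Dictionary<char, int>[len];
                for (int p = 0; p < len; p++)
                {
                    counts[p] = new Dictionary<char, int>();
                    foreach (string s in items)
                    {
                        counts[p].TryGetValue(s[p], out int c);
                        counts[p][s[p]] = c + 1;
                    }
                }

                // Second pass: keep count-1 characters, blank out the rest.
                var result = new string[items.Length];
                for (int i = 0; i < items.Length; i++)
                {
                    var chars = new char[len];
                    for (int p = 0; p < len; p++)
                        chars[p] = counts[p][items[i][p]] == 1 ? items[i][p] : '.';
                    result[i] = new string(chars);
                }
                return result;
            }
        }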

  • web.config transforms not being applied on either publish or build installation package

    - by BenA
    Today I started playing with the web.config transforms in VS 2010. To begin with, I attempted the same hello-world example that features in a lot of the blog posts on this topic: updating a connection string. I created the minimal example shown below (and similar to the one found in this blog). The problem is that whenever I do a right-click "Publish" or a right-click "Build Deployment Package" on the .csproj file, I'm not getting the correct output. Rather than a transformed web.config, I'm getting no web.config, and instead the two transform files are included. What am I doing wrong? Any help gratefully received!

    Web.config:

        <?xml version="1.0"?>
        <configuration xmlns="http://schemas.microsoft.com/.NetConfiguration/v2.0">
          <connectionStrings>
            <add name="ConnectionString"
                 connectionString="server=(local); initial catalog=myDB; user=xxxx;password=xxxx"
                 providerName="System.Data.SqlClient"/>
          </connectionStrings>
        </configuration>

    Web.debug.config:

        <?xml version="1.0"?>
        <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
          <connectionStrings>
            <add name="ConnectionString"
                 connectionString="server=DebugServer; initial catalog=myDB; user=xxxx;password=xxxx"
                 providerName="System.Data.SqlClient"
                 xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/>
          </connectionStrings>
        </configuration>

    Web.release.config:

        <?xml version="1.0"?>
        <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
          <connectionStrings>
            <add name="ConnectionString"
                 connectionString="server=ReleaseServer; initial catalog=myDB; user=xxxx;password=xxxx"
                 providerName="System.Data.SqlClient"
                 xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/>
          </connectionStrings>
        </configuration>

  • Ideas for multiplatform encrypted java mobile storage system

    - by Fernando Miguélez
    Objective

    I am currently designing the API for a multiplatform storage system that would offer the same interface and capabilities across the following supported mobile Java platforms:

    - J2ME. Minimum configuration/profile CLDC 1.1/MIDP 2.0, with support for some necessary JSRs (JSR-75 for file storage).
    - Android. No minimum platform version decided yet, but it would rather likely be API level 7.
    - Blackberry. It would use the same base source as J2ME, but take advantage of some advanced capabilities of the platform. No minimum configuration decided yet (maybe 4.6, because of the 64 KB limitation for RMS on 4.5).

    Basically the API would sport three kinds of stores:

    - Files. These would allow standard directory/file manipulation (read/write through streams, create, mkdir, etc.).
    - Preferences. A special store that handles properties accessed through keys (similar to a plain old Java properties file, but supporting some improvements such as different value data types, like SharedPreferences on the Android platform).
    - Local message queues. This store would offer basic message queue functionality.

    Considerations

    Inspired by JSR-75, all types of stores would be accessed in a uniform way by means of a URL following RFC 1738 conventions, but with custom-defined prefixes (i.e. "file://" for files, "prefs://" for preferences or "queue://" for message queues). The address would refer to a virtual location that would be mapped to a physical storage object by each mobile platform implementation. Only files would allow hierarchical storage (folders) and access to external storage memory cards (by means of a unit name, the same way as in JSR-75, but one that would not change regardless of the underlying platform). The other types would only support flat storage.

    The system should also support a secure version of all basic types. The user would indicate it by prefixing "s" to the URL (i.e. "sfile://" instead of "file://"). The API would only require one PIN (introduced only once) to access any kind of secure object type.

    Implementation issues

    For the implementation of both plaintext and encrypted stores, I would use the functionality available on the underlying platforms:

    - Files. These are available on all platforms (on J2ME only with JSR-75, but it is mandatory for our needs). The abstract file to actual file mapping is straightforward except for addressing issues.
    - RMS. This type of store, available on the J2ME (and Blackberry) platforms, is convenient for preferences and maybe message queues (though depending on performance or size requirements these could be implemented by means of normal files).
    - SharedPreferences. This type of storage, only available on Android, would match the Preferences needs.
    - SQLite databases. These could be used for message queues on Android (and maybe Blackberry).

    When it comes to encryption, some requirements should be met:

    - To ease the implementation, it will be carried out on a read/write-operation basis on streams (for files), RMS records, SharedPreferences key-value pairs, and SQLite database columns.
    - Every underlying storage object should use the same encryption key.
    - Handling of encrypted stores should be the same as for the unencrypted counterpart. The only difference (from the user's point of view) when accessing an encrypted store would be the addressing.
    - The user PIN provides access to any secure storage object, but changing it would not require decrypting/re-encrypting all the encrypted data.
    - Cryptographic capabilities of the underlying platform should be used whenever possible, so we would use: on J2ME, SATSA-CRYPTO if it is available (it is not mandatory) or the lightweight BouncyCastle cryptographic framework for J2ME; on Blackberry, the RIM Cryptographic API or BouncyCastle; on Android, JCE with an integrated cryptographic provider (BouncyCastle?).

    Doubts

    Having reached this point, I was struck by some doubts about which solution would be more convenient, taking into account the limitations of the platforms. These are some of my doubts:

    - Encryption algorithm for data. Would AES-128 be strong and fast enough? What alternatives for such a scenario would you suggest?
    - Encryption mode. I have read about the weakness of ECB encryption versus CBC, but in this case the former would have the advantage of random access to blocks, which is interesting for seek functionality on files. What type of encryption mode would you choose instead? Is stream encryption suitable for this case?
    - Key generation. There could be one key generated for each storage object (file, RMS RecordStore, etc.) or just one used for all the objects of the same type. The first seems "safer", though it would require some extra space on the device. In your opinion, what would be the trade-offs of each?
    - Key storage. For this case, using a standard JKS (or PKCS#12) KeyStore file could be suited to storing encryption keys, but I could also define a smaller structure (encryption transformation / key data / checksum) that could be attached to each store (i.e. using additional files with the same name and a special extension for plain files, or embedded inside other types of objects such as RMS record stores). What approach would you prefer? And when it comes to using a standard KeyStore with multiple-key generation (given this is your preference), would it be better to use a KeyStore per storage object or just one global KeyStore keeping all keys (i.e. using the URL identifier of the abstract storage object as the alias)?
    - Master key. The use of a master key seems obvious. This key should be protected by the user PIN (introduced only once) and would allow access to the rest of the encryption keys (they would be encrypted by means of this master key). Changing the PIN would only require re-encrypting this key, and not all the encrypted data. Where would you keep it, taking into account that if it got lost, all data would no longer be accessible? What further considerations should I take into account?
    - Platform cryptography support. Do SATSA-CRYPTO-enabled J2ME phones really take advantage of some dedicated hardware acceleration (or another advantage I have not foreseen), and would this approach be preferred (whenever possible) over a plain BouncyCastle implementation? For the same reason, is the RIM Cryptographic API worth the license cost over BouncyCastle?

    Any comments, critiques, further considerations or different approaches are welcome.
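
    As a concrete reference point for the encryption-mode doubt above, here is what the CBC option typically looks like: a fresh random IV per message, stored in front of the ciphertext. The sketch is C# (standing in for the equivalent BouncyCastle calls on the Java platforms named above), and note that CBC written this way gives up the random block access that made ECB tempting:

        // Sketch: AES-CBC with a per-message random IV prepended to the ciphertext.
        using System;
        using System.IO;
        using System.Security.Cryptography;

        static class CbcSketch
        {
            public static byte[] Encrypt(byte[] plaintext, byte[] key)
            {
                using (Aes aes = Aes.Create())
                {
                    aes.Key = key;        // e.g. a 16-byte key for AES-128
                    aes.Mode = CipherMode.CBC;
                    aes.GenerateIV();     // never reuse an IV with the same key

                    using (var ms = new MemoryStream())
                    {
                        ms.Write(aes.IV, 0, aes.IV.Length); // prepend IV for decryption
                        using (var cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write))
                        {
                            cs.Write(plaintext, 0, plaintext.Length);
                            cs.FlushFinalBlock();
                        }
                        return ms.ToArray();
                    }
                }
            }
        }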

  • ASP.Net 4.0 - Response required in SiteMap building?

    - by Nick Craver
    I'm running into an issue upgrading a project to .NET 4.0, and having trouble finding any reason for the issue (or at least the change causing it). Given the freshness of 4.0, there aren't a lot of blog posts out there for issues yet, so I'm hoping someone here has an idea. Preface: this is a Web Forms application, going from 3.5 SP1 to 4.0.

    In the Application_Start event we're iterating through the SiteMap and constructing routes based off the data there (prettifying URLs, mostly, with some utility added); that part isn't failing though... or at least isn't getting that far. It seems that calling SiteMap.RootNode (inside Application_Start) causes 4.0 to eat it, since the XmlSiteMapProvider.GetNodeFromXmlNode method has been changed. Looking in Reflector you can see it's hitting HttpResponse.ApplyAppPathModifier here:

        str2 = HttpContext.Current.Response.ApplyAppPathModifier(str2);

    HttpResponse wasn't used at all in this method in the 2.0 CLR, so what we had worked fine. In 4.0, though, that method is called as a result of this stack:

        [HttpException (0x80004005): Response is not available in this context.]
        System.Web.XmlSiteMapProvider.GetNodeFromXmlNode(XmlNode xmlNode, Queue queue)
        System.Web.XmlSiteMapProvider.ConvertFromXmlNode(Queue queue)
        System.Web.XmlSiteMapProvider.BuildSiteMap()
        System.Web.XmlSiteMapProvider.get_RootNode()
        System.Web.SiteMap.get_RootNode()

    Since Response isn't available here in 4.0, we get an error. To reproduce this, you can narrow the test case down to this in Global.asax:

        protected void Application_Start(object sender, EventArgs e)
        {
            var s = SiteMap.RootNode; // Kaboom!
            // or just
            var r = Context.Response;
            // or
            var r = HttpContext.Current.Response;
            // all result in the same "not available" error
        }

    Question: am I missing something obvious here? Or is there another event added in 4.0 that's recommended for anything related to SiteMap on startup?

    For anyone curious/willing to help, I've created a very minimal project (a default VS 2010 ASP.NET 4.0 site, all the bells & whistles removed, and only a blank sitemap and the Application_Start code added). It's a small 10kb zip available here: http://www.ncraver.com/Test/SiteMapTest.zip

    Update: not a great solution, but the current workaround is to do the work in Application_BeginRequest, like this:

        private static bool routesRegistered = false;

        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            if (!routesRegistered)
            {
                Application.Lock();
                if (!routesRegistered)
                    RouteManager.RegisterRoutes(RouteTable.Routes);
                routesRegistered = true;
                Application.UnLock();
            }
        }

    I don't particularly like this; it feels like an abuse of the event to bypass the issue. Does anyone have at least a better workaround, since the .NET 4 behavior with SiteMap is not likely to change?

  • JPA Entity Manager resource handling

    - by chiragshahkapadia
    Every time I call a JPA method it re-creates the entity and named-query bindings. My persistence properties are:

        <property name="hibernate.dialect" value="org.hibernate.dialect.Oracle10gDialect"/>
        <property name="hibernate.cache.provider_class" value="net.sf.ehcache.hibernate.SingletonEhCacheProvider"/>
        <property name="hibernate.cache.use_second_level_cache" value="true"/>
        <property name="hibernate.cache.use_query_cache" value="true"/>

    And I am creating the entity manager the way shown below:

        emf = Persistence.createEntityManagerFactory("pu");
        em = emf.createEntityManager();
        em = Persistence.createEntityManagerFactory("pu").createEntityManager();

    Is there a nice way to manage the entity manager resource instead of creating a new one every time, or a property I can set in persistence.xml? Remember, it's JPA. See the binding log below, which appears every time:

        15:35:15,527 INFO [AnnotationBinder] Binding entity from annotated class: *
        15:35:15,527 INFO [QueryBinder] Binding Named query: * = *
        15:35:15,527 INFO [QueryBinder] Binding Named query: * = *
        15:35:15,527 INFO [QueryBinder] Binding Named query:
        15:35:15,527 INFO [QueryBinder] Binding Named query:
        15:35:15,527 INFO [QueryBinder] Binding Named query:
        15:35:15,527 INFO [QueryBinder] Binding Named query:
        15:35:15,527 INFO [QueryBinder] Binding Named query:
        15:35:15,527 INFO [QueryBinder] Binding Named query:
        15:35:15,527 INFO [QueryBinder] Binding Named query:
        15:35:15,527 INFO [EntityBinder] Bind entity com.* on table *
        15:35:15,542 INFO [HibernateSearchEventListenerRegister] Unable to find org.hibernate.search.event.FullTextIndexEventListener on the classpath. Hibernate Search is not enabled.
        15:35:15,542 INFO [NamingHelper] JNDI InitialContext properties:{}
        15:35:15,542 INFO [DatasourceConnectionProvider] Using datasource:
        15:35:15,542 INFO [SettingsFactory] RDBMS: and Real Application Testing options
        15:35:15,542 INFO [SettingsFactory] JDBC driver: Oracle JDBC driver, version: 9.2.0.1.0
        15:35:15,542 INFO [Dialect] Using dialect: org.hibernate.dialect.Oracle10gDialect
        15:35:15,542 INFO [TransactionFactoryFactory] Transaction strategy: org.hibernate.transaction.JDBCTransactionFactory
        15:35:15,542 INFO [TransactionManagerLookupFactory] No TransactionManagerLookup configured (in JTA environment, use of read-write or transactional second-level cache is not recommended)
        15:35:15,542 INFO [SettingsFactory] Automatic flush during beforeCompletion(): disabled
        15:35:15,542 INFO [SettingsFactory] Automatic session close at end of transaction: disabled
        15:35:15,542 INFO [SettingsFactory] JDBC batch size: 15
        15:35:15,542 INFO [SettingsFactory] JDBC batch updates for versioned data: disabled
        15:35:15,542 INFO [SettingsFactory] Scrollable result sets: enabled
        15:35:15,542 INFO [SettingsFactory] JDBC3 getGeneratedKeys(): disabled
        15:35:15,542 INFO [SettingsFactory] Connection release mode: auto
        15:35:15,542 INFO [SettingsFactory] Default batch fetch size: 1
        15:35:15,542 INFO [SettingsFactory] Generate SQL with comments: disabled
        15:35:15,542 INFO [SettingsFactory] Order SQL updates by primary key: disabled
        15:35:15,542 INFO [SettingsFactory] Order SQL inserts for batching: disabled
        15:35:15,542 INFO [SettingsFactory] Query translator: org.hibernate.hql.ast.ASTQueryTranslatorFactory
        15:35:15,542 INFO [ASTQueryTranslatorFactory] Using ASTQueryTranslatorFactory
        15:35:15,542 INFO [SettingsFactory] Query language substitutions: {}
        15:35:15,542 INFO [SettingsFactory] JPA-QL strict compliance: enabled
        15:35:15,542 INFO [SettingsFactory] Second-level cache: enabled
        15:35:15,542 INFO [SettingsFactory] Query cache: enabled
        15:35:15,542 INFO [SettingsFactory] Cache region factory: org.hibernate.cache.impl.bridge.RegionFactoryCacheProviderBridge
        15:35:15,542 INFO [RegionFactoryCacheProviderBridge] Cache provider: net.sf.ehcache.hibernate.SingletonEhCacheProvider
        15:35:15,542 INFO [SettingsFactory] Optimize cache for minimal puts: disabled
        15:35:15,542 INFO [SettingsFactory] Structured second-level cache entries: disabled
        15:35:15,542 INFO [SettingsFactory] Query cache factory: org.hibernate.cache.StandardQueryCacheFactory
        15:35:15,542 INFO [SettingsFactory] Statistics: disabled
        15:35:15,542 INFO [SettingsFactory] Deleted entity synthetic identifier rollback: disabled
        15:35:15,542 INFO [SettingsFactory] Default entity-mode: pojo
        15:35:15,542 INFO [SettingsFactory] Named query checking: enabled
        15:35:15,542 INFO [SessionFactoryImpl] building session factory
        15:35:15,542 INFO [SessionFactoryObjectFactory] Not binding factory to JNDI, no JNDI name configured
        15:35:15,542 INFO [UpdateTimestampsCache] starting update timestamps cache at region: org.hibernate.cache.UpdateTimestampsCache
        15:35:15,542 INFO [StandardQueryCache] starting query cache at region: org.hibernate.cache.StandardQueryCache
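
    For what it's worth, the binding log is produced by createEntityManagerFactory(), not by createEntityManager(), so the usual fix is to build the factory once and hand out short-lived EntityManagers from it. A minimal sketch, assuming a plain Java SE setup (the JpaUtil name is illustrative, not from the question):

        import javax.persistence.EntityManager;
        import javax.persistence.EntityManagerFactory;
        import javax.persistence.Persistence;

        // Illustrative helper: the factory is built once per application,
        // EntityManagers are created per unit of work.
        public final class JpaUtil {
            // This is the expensive call: it parses persistence.xml and binds
            // every entity and named query, producing the log shown above.
            private static final EntityManagerFactory EMF =
                    Persistence.createEntityManagerFactory("pu");

            private JpaUtil() {}

            public static EntityManager newEntityManager() {
                // Cheap to create; open per unit of work and close when done.
                return EMF.createEntityManager();
            }

            public static void shutdown() {
                EMF.close(); // once, at application shutdown
            }
        }

    With the factory cached this way, the AnnotationBinder/QueryBinder output appears once at startup instead of on every call:

        EntityManager em = JpaUtil.newEntityManager();
        try {
            // queries, transactions, ...
        } finally {
            em.close();
        }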


  • Audio/video streaming on Windows platform

    - by bushtucker
    I'm building an interactive language-learning application to be used in a classroom environment. The idea is that a teacher should be able to talk to the students (= audio stream to all students), let students talk to each other (= P2P audio) in groups of two or more, and let students watch video coming from the DVD player or from a media server. It should be possible to save the audio/video streams. The teacher should also be able to monitor, take over or block the desktop of the students. The platform is Windows and it's a desktop application, not a web application. The audio delay should be as small as possible. Optionally a student sitting at home should be supported, but that's not a high priority.

    I am now finished with the classroom-control part of the application (login, monitor, block, ...) and want to start on the audio and video part. I've been evaluating several options, such as DirectX, GStreamer and SIP, but now I have to make a decision.

    DirectX seems an obvious choice for the Windows platform, but it only lets me capture and play back audio and video; the encoding/decoding/network part I would have to do myself.

    GStreamer contains all kinds of options to capture/encode/stream/save audio and video streams. I've experimented a bit with it (ossbuild) and it does seem to involve a lot of trial and error to make something work:

        - microphone capture (via directsoundsrc) produces crackling noises on some computers
        - the rtpL16 payloader didn't work well
        - streaming raw audio over the network only worked at a sampling rate of 8000, nothing higher
        - there are a lot of errors when receiving MPEG-4 video (bad I-frame), on some computers worse than others

    It is my impression that GStreamer is primarily targeted at Linux platforms; development and support for the Windows platform seem to be a little behind. Nevertheless it's a powerful framework that could save me months or years of work.

    SIP seems to be able to do everything I want, but it is targeted at telephony and IM, and I don't know how flexible it is. It seems to me that the SIP layer would just be overhead, as I already have a central (teacher) application that can control and set up all the streams. The interesting parts of frameworks like opalvoip and freeswitch are the actual audio/video capture, the encoding and the transmission. Does anyone know how these interesting parts relate to a framework like GStreamer? Are they easy to integrate into a custom application? Are they flexible enough?

    Does anyone have experience with all or one of these technologies? Maybe there are other options I could look at? Many thanks for your advice.
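
    For reference, a minimal sketch of the kind of GStreamer capture-and-send pipeline the experiments above involve, built with gst_parse_launch(); the Speex encoder, host and port are illustrative guesses, not from the question, and directsoundsrc behaviour varies per machine as noted:

        #include <gst/gst.h>

        int main(int argc, char *argv[]) {
            gst_init(&argc, &argv);

            // Capture from DirectSound, compress with Speex, send as RTP over UDP.
            // Swapping speexenc/rtpspeexpay for raw audio + rtpL16pay reproduces
            // the bandwidth-heavy setup described above.
            GError *error = NULL;
            GstElement *pipeline = gst_parse_launch(
                "directsoundsrc ! audioconvert ! audioresample "
                "! speexenc ! rtpspeexpay ! udpsink host=192.168.0.10 port=5000",
                &error);
            if (pipeline == NULL) {
                g_printerr("Failed to build pipeline: %s\n", error->message);
                g_error_free(error);
                return 1;
            }

            gst_element_set_state(pipeline, GST_STATE_PLAYING);

            GMainLoop *loop = g_main_loop_new(NULL, FALSE);
            g_main_loop_run(loop); // run until the process is stopped
            return 0;
        }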


  • getline() sets failbit and skips last line

    - by Thanatos
    I'm using std::getline() to enumerate the lines in a file, and it's mostly working. It's left me curious, however: std::getline() skips the very last line in my file, but only if that line is blank. Using this minimal example:

        #include <iostream>
        #include <string>

        int main() {
            std::string line;
            while (std::getline(std::cin, line))
                std::cout << "Line: “" << line << "”\n";
            return 0;
        }

    If I feed it this:

        Line A
        Line B
        Line C

    I get those lines back at me. But this:

        Line A
        Line B
        Line C
        [* line is present but blank, i.e. the file end is: "...B\nLine C\n" *]

    (I unfortunately can't have a blank line in SO's little code box thing...)

    So, the first file has three lines (["Line A", "Line B", "Line C"]) and the second file has four (["Line A", "Line B", "Line C", ""]). This seems wrong to me: I have a four-line file, and enumerating it with getline() leaves me with three. What's really got me scratching my head is that this is exactly what the standard says it should do (21.3.7.9). Even Python has similar behaviour (but it gives me the newlines too; C++ chops them off). Is this some weird thing where C++ expects lines to be terminated by '\n', not separated by it, and I'm feeding it something different?

    Edit: Clearly, I need to expand a bit here. I've come across two philosophies of what a "line" in a file is:

        - Lines are terminated by newlines. Dominant in systems such as Linux and editors like vim. It's possible to have a slightly "odd" file by not having a final '\n' (a "noeol" in vim), and impossible to have a blank line at the end of a file.
        - Lines are separated by newlines. Dominant in just about every Windows editor I've ever come across. Every file is valid, and it's possible to have the last line be blank.

    Of course, YMMV as to what a newline is. I've always treated these as two completely different schools of thought. One point I tried to make earlier was to ask whether the C++ standard is explicitly or merely implicitly following the first. (Curiously, where does the Mac fall: terminated or separated?)
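
    Not from the question itself, just a sketch: if separator semantics are what's wanted, treating '\n' as a separator by hand reproduces the four-line reading (assumes plain '\n' newlines):

        #include <iostream>
        #include <string>
        #include <vector>

        int main() {
            std::vector<std::string> lines;
            std::string current;
            char c;
            while (std::cin.get(c)) {
                if (c == '\n') {              // separator: close the current line
                    lines.push_back(current);
                    current.clear();
                } else {
                    current += c;
                }
            }
            lines.push_back(current);         // the piece after the last separator;
                                              // "" when the input ends with '\n'

            for (const std::string& l : lines)
                std::cout << "Line: “" << l << "”\n";
            return 0;
        }

    One edge case follows from the semantics: completely empty input yields a single empty line, just as splitting "" on a separator does in Python.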

