Search Results

Search found 9025 results on 361 pages for 'quad core'.

  • Changing the serialization procedure for a graph of objects (.NET Framework)

    - by pierusch
    Hello, I'm developing a scientific application using the .NET Framework. The application depends heavily on a large tree-like data structure that has been serialized using a standard BinaryFormatter object. The graph structure looks like this:

        <Serializable()> Public Class BigObjet
            Inherits List(Of smallObject)
        End Class

        <Serializable()> Public Class smallObject
            Inherits List(Of otherSmallerObjects)
        End Class
        ...

    The BinaryFormatter object does a nice job, but it's not optimized at all: the entire data structure reaches around 100 MB on my filesystem. Deserialization works too, but it's pretty slow (around 30 seconds on my quad core). I found a nice .dll on CodeProject (see "Optimizing Serialization..."), so I wrote a modified version of the classes above, overriding the default serialization/deserialization procedure, and reached very good results. The problem is this: I can't lose the data previously serialized with the old version, and I'd like to be able to use the new serialization/deserialization method. I have some ideas, but I'm pretty sure someone will be able to give me proper and better advice:

        1. Use a "helper" graph of objects that takes care of the entire serialization/deserialization procedure, reading data in the old format and converting it into the classes I need. This could work, but the BinaryFormatter "needs" to know the types being serialized, so........ :(
        2. Modify the "old" graph to include a modified version of the serialization procedure, so I'll be able to deserialize old files and save them in the new format. This doesn't sound too good, IMHO (a conversion sketch follows below).

    Well, any help will be highly appreciated :)
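
    A one-off converter along the lines of option 2 is sketched below. Assumptions: SaveNewFormat is a stand-in for whatever entry point the optimized serializer exposes, and the converter must be compiled against the old class definitions (before the custom serialization overrides are added), or BinaryFormatter will try to deserialize through the new code path:

        ' Read a legacy BinaryFormatter file once, rewrite it in the new format.
        Imports System.IO
        Imports System.Runtime.Serialization.Formatters.Binary

        Module LegacyConverter
            Sub ConvertFile(ByVal oldPath As String, ByVal newPath As String)
                Dim graph As BigObjet
                Using fs As New FileStream(oldPath, FileMode.Open, FileAccess.Read)
                    Dim formatter As New BinaryFormatter()
                    ' Works because the type names and assembly are unchanged.
                    graph = DirectCast(formatter.Deserialize(fs), BigObjet)
                End Using
                graph.SaveNewFormat(newPath) ' hypothetical: the new serializer's save call
            End Sub
        End Module

    Running this once over the existing files means the production classes never need to carry the old read path at all.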

  • Xcode Core Data files not found/opening

    - by jAmi
    Hello, I am using Xcode 3.1.4, which means SDK 3.1.2. The problem is that I cannot open a .xcdatamodel (Core Data) file, and I don't even get the Data Model option in the Design menu. Whenever I double-click and try to open the file, Xcode gives me an error saying that it cannot find the file at (my project's path): "Perhaps it was moved or deleted?" But the file exists at that same path. Please help me out, as I haven't upgraded my system to Snow Leopard and so can't use SDK 3.2. Regards, jAmi

  • How do I preserve installed applications when migrating Ubuntu to another platform?

    - by michaeljoseph
    I'm looking at possibly moving from an older AMD64 machine to a new Intel dual-core which is 32-bit. Installation isn't a problem, but can I transfer all the installed apps? I haven't been able to find anything so far on Google except cases where the migration is to a similar platform and filesystem. I won't change the filesystem, but the platform will be different. Is there something along the lines of the "world" file in Gentoo?
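
    The closest Debian/Ubuntu equivalent (a sketch, not cross-architecture tooling as such) is to export the package selections with dpkg and replay them on the new install; each package then resolves to its 32-bit build:

        # On the old (AMD64) machine:
        dpkg --get-selections > packages.list

        # On the new (32-bit) machine, after copying packages.list over:
        sudo dpkg --set-selections < packages.list
        sudo apt-get dselect-upgrade

    This reinstalls the same package set rather than copying binaries, which is what makes it safe across architectures; configuration under /etc and /home still has to be copied separately.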

  • Creating multiple heads in remote repository

    - by Jab
    We are looking to move our team (~10 developers) from SVN to Mercurial, and we are trying to figure out how to manage our workflow. In particular, we are trying to see if creating remote heads is the right solution.

    We currently have a very large repository with multiple related projects. They share a lot of code, but pieces of the project are deployed by different teams (3 teams) independently of other portions of the code base, so each team is working on concurrent large features. The way we currently handle this in SVN is branches: Team1 has a branch for Feature1, and same deal for the other teams. When Team1 finishes their change, it gets merged into the trunk and deployed out. The other teams follow suit when their project is complete, merging of course.

    So my initial thought is to use named branches for these situations: Team1 makes a Feature1 branch off of the default branch in Hg. Now, here is the question: should the team PUSH that branch, in its current, half-finished state, to the repository? This will create a second head in the core repo. My initial reaction was "NO!", as it seems like a bad idea. Handling multiple heads on our repository just sounds awful, but there are some advantages.

    First, the teams want to set up Continuous Integration to build this branch during their development cycle (months long). This will only work if the CI can pull the branch from the repo. This is something we do now with SVN: copy a CI build and change the branch. Easy.

    Second, it makes it easier for any team member to jump onto the branch and start working. Without pushing to the core repo, they would have to receive a push from a developer on that team with the changeset information. It is also possible to lose local commits to hardware failure, and the chances increase a lot if it's a branch by a single developer who has followed the "don't push until finished" approach.

    And lastly, there is ease of use: the developers can easily just commit and push on their branch at any time without consequence, as they do today in their SVN branches.

    Is there a better way to handle this scenario that I may be missing? I just want a veteran's opinion before moving forward with the strategy. For bug fixes we like the general Mercurial workflow of anonymous branches that only consist of 1-2 commits; the simplicity is great for those cases. By the way, I've read this great article, which seems to favor named branches.
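
    For concreteness, the named-branch workflow being weighed looks like this in hg (a sketch; the --new-branch flag requires Mercurial 1.6 or later):

        hg branch Feature1             # start the team's named branch
        hg commit -m "Start Feature1"
        hg push --new-branch           # deliberately create the new head in the core repo

        # CI or a teammate picks the branch up with:
        hg pull
        hg update Feature1

        # When the feature ships:
        hg update default
        hg merge Feature1
        hg commit -m "Merge Feature1"

    Each named branch contributes one extra head, which Mercurial handles fine; the usual guard rail is that a push creating additional anonymous heads on the same branch is still refused without -f.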

  • Problem adding eclair sources to Eclipse 3.5

    - by Oletros
    I have synced the eclair sources using repo into ~/eclair_sources/. In Eclipse I created a project from the existing sources, added that folder, and now I have a lot of errors like these:

        Description: android.R.attr cannot be resolved to a type
        Resource:    SuggestionsAdapter.java
        Path:        /eclair/frameworks/base/core/java/android/app
        Location:    line 411
        Type:        Java Problem

        Description: _Original_Bitmap cannot be resolved to a type
        Resource:    Bitmap.java
        Path:        /eclair/frameworks/base/tools/layoutlib/bridge/src/android/graphics
        Location:    line 27
        Type:        Java Problem

    Any advice?
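
    In case it helps, AOSP ships a ready-made Eclipse classpath; the usual recipe (a sketch based on the platform's own Eclipse notes, which assume a full make has already completed so the generated sources exist) is:

        cd ~/eclair_sources
        cp development/ide/eclipse/.classpath .
        chmod u+w .classpath   # the checked-in copy is read-only

    Errors like "android.R.attr cannot be resolved" typically mean the generated R classes from the build output are not on the project classpath, which the shipped .classpath is meant to take care of.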

  • JavaScript self-contained sandbox events and client-side stack

    - by amnon
    I'm in the process of moving a JSF-heavy web application to a REST and mainly JS-module application. I've watched "Scalable JavaScript Application Architecture" by Nicholas Zakas on YUI Theater (excellent video) and implemented much of the talk with good success, but I have some questions:

    1. I found the lecture a little confusing regarding the relationship between modules and sandboxes. On one hand, to my understanding, modules should not be affected by anything happening outside their sandbox, which is why they publish events via the sandbox (and not via the core, although they do access the core for hiding the base library). But each module in the application gets a new sandbox? Shouldn't the sandbox limit events to the modules using it, or should events be published page-wide? For example, if I have two editable tables and I want to contain each one in a different sandbox, with its events affecting only the modules inside that sandbox (something like a message box per table, each a different module/widget), how can I do that with a sandbox per module? Of course I can prefix the events with the module id, but that creates coupling I want to avoid, and I don't want to package modules together as one module per combination, as I already have 6-7 modules. (See the sketch below.)

    2. While I can hide the base library for small things like an id selector, I would still like to use the base library for module dependencies and resource loading, with something like the YUI Loader or dojo.require. So in effect I'm hiding the base library, but the modules themselves are defined and loaded by the base library, which seems a little strange to me.

    3. Libraries don't return simple JS objects but usually wrap them. For example, you can write something like $$('.classname').each(..., which cleans up the code a lot. It makes no sense to wrap the base library and then, inside the module, create a dependency on it by calling .each; but not using those features means writing a lot of code that could be left out, and reimplementing that functionality is very bug-prone.

    4. Does anyone have experience building a front-end stack of this kind? How easy is it to change the base library, and/or to mix modules from different libraries, e.g. using the YUI DataTable but doing form validation with Dojo?

    5. Somewhat a combination of 2 and 4: if I load Dojo form-validation widgets for inputs via the YUI Loader, does that mean Dojo core is a module, and the form module depends on it?

    Thanks.
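
    On question 1, one common resolution (a sketch of one interpretation, not Zakas's reference code) is to give each sandbox its own event channel and let the core decide which modules share a sandbox instance; scoping then lives in the wiring, not in event names:

        // Minimal per-sandbox pub/sub: events stay inside one sandbox
        // unless two modules are registered on the same instance.
        function Sandbox() {
            var listeners = {};
            this.publish = function (event, data) {
                var subs = listeners[event] || [];
                for (var i = 0; i < subs.length; i++) { subs[i](data); }
            };
            this.subscribe = function (event, fn) {
                (listeners[event] = listeners[event] || []).push(fn);
            };
        }

        var tableOneBox = new Sandbox();   // table 1 and its message box share this
        var tableTwoBox = new Sandbox();   // table 2's widgets get their own

        tableOneBox.subscribe("rowEdited", function (row) { /* update message box 1 */ });
        tableOneBox.publish("rowEdited", { id: 42 });      // never reaches tableTwoBox

    This keeps the "message box per table" case working without event-name prefixes: modules remain decoupled because they only ever talk to the sandbox instance they were handed.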

  • How to reduce redundant code when adding new C++0x rvalue reference operator overloads

    - by Inverse
    I am adding new operator overloads to take advantage of C++0x rvalue references, and I feel like I'm producing a lot of redundant code. I have a class, tree, that holds a tree of algebraic operations on double values. Here is an example use case:

        tree x = 1.23;
        tree y = 8.19;
        tree z = (x + y)/67.31 - 3.15*y;
        ...
        std::cout << z; // prints "(1.23 + 8.19)/67.31 - 3.15*8.19"

    For each binary operation (like plus), each side can be either an lvalue tree, rvalue tree, or double. This results in 8 overloads for each binary operation:

        // core rvalue overloads for plus:
        tree operator +(const tree& a, const tree& b);
        tree operator +(const tree& a, tree&& b);
        tree operator +(tree&& a, const tree& b);
        tree operator +(tree&& a, tree&& b);

        // cast and forward cases:
        tree operator +(const tree& a, double b) { return a + tree(b); }
        tree operator +(double a, const tree& b) { return tree(a) + b; }
        tree operator +(tree&& a, double b)      { return std::move(a) + tree(b); }
        tree operator +(double a, tree&& b)      { return tree(a) + std::move(b); }

        // 8 more overloads for minus
        // 8 more overloads for multiply
        // 8 more overloads for divide
        // etc

    This also has to be repeated for each binary operation (minus, multiply, divide, etc.). As you can see, there are really only 4 functions I actually need to write; the other 4 can cast and forward to the core cases. Do you have any suggestions for reducing the size of this code?

    PS: The class is actually more complex than just a tree of doubles, and reducing copies does dramatically improve the performance of my project, so the rvalue overloads are worthwhile for me even with the extra code. I have a suspicion that there might be a way to template away the "cast and forward" cases above, but I can't seem to think of anything.
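
    One way to template away the forwarding layer (a sketch, assuming tree has an implicit constructor from double, which the usage tree x = 1.23 suggests) is a single SFINAE-constrained operator that normalizes both sides into tree prvalues and calls one implementation:

        #include <type_traits>
        #include <utility>

        // Single core implementation per operation; takes by value so it
        // can consume moves and freshly built temporaries alike.
        tree plus_impl(tree a, tree b);

        // Replaces all eight overloads: any mix of tree lvalues, tree
        // rvalues and doubles lands here. Copies happen only for lvalue
        // trees, exactly as in the hand-written const tree& overloads.
        template <typename A, typename B>
        typename std::enable_if<std::is_convertible<A, tree>::value &&
                                std::is_convertible<B, tree>::value, tree>::type
        operator +(A&& a, B&& b)
        {
            return plus_impl(tree(std::forward<A>(a)), tree(std::forward<B>(b)));
        }

    The two-liner then repeats once per operation (minus_impl, times_impl, ...) instead of eight times; a macro over the operator token can collapse even that. One caveat of the sketch: the constraint is deliberately broad, so the template should live in tree's own namespace and be found via ADL rather than sit at global scope, to avoid surprising matches.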

  • In Mercurial, is there a way to disable ALL configurations (system, user, repo)?

    - by Geoffrey Zheng
    On any non-trivial hg installation, the hgrc files tend to contain significant stuff. Is there a way to completely ignore/bypass ALL configuration, from the system and user levels down to the repo level? The use case is using some core hg functionality in automation scripts. Currently, if anything is misconfigured (and I mess with my ~/.hgrc a lot), the scripts will abort over something they don't use at all. It'd be perfect if I could just run hg <whatever> --config:none.
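
    A sketch of what the environment does offer (there is no --config:none flag; the variables below are documented in hg's man page): setting HGRCPATH to an empty string makes Mercurial skip the system- and user-level files and read only the repository's own .hg/hgrc:

        # Ignore /etc/mercurial/* and ~/.hgrc entirely for one invocation:
        HGRCPATH= hg log -l 5

        # Complementary: HGPLAIN disables aliases, defaults and localized
        # output that can break script parsing (it does not skip all config):
        HGPLAIN=1 hg status

    The repo-level .hg/hgrc is still honored with an empty HGRCPATH, so truly ignoring all three levels means combining this with repos whose own hgrc is trivial.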

  • Find out why Xcode has decided to link to a particular library

    - by andygeers
    I'm using the Unity 3D engine to build an iPhone app, and when I go to generate my Xcode project for compilation, it includes a few fairly large libraries: Mono.Security.dll.s, System.dll.s, System.Core.dll.s, etc. I don't know if this question is really an Xcode question or a Unity question, but I'm trying to figure out why each of those libraries is being linked - which functions / classes are being referenced - ideally so that I can rewrite my code to remove as many of the dependencies as possible. Does anybody know a way to find this information out?

  • Memory randomization as application security enhancement?

    - by Paul Sasik
    I recently came upon a Microsoft article that touted new "defensive enhancements" of Windows 7. Specifically: Address space layout randomization (ASLR) Heap randomization Stack randomization The article went on to say that "...some of these defenses are in the core operating system, and the Microsoft Visual C++ compiler offers others" but didn't explain how these strategies would actually increase security. Anyone know why memory randomization increases security, if at all? Do other platforms and compilers employ similar strategies?
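
    For context on the compiler half of that quote (a sketch; the switches are real MSVC compiler/linker options, though defaults vary by version, and randomization itself needs no source changes): the mitigations are opted into per binary at build time, and the OS then randomizes load addresses so an exploit cannot hardcode the address of a stack buffer, heap block, or module gadget:

        REM Typical MSVC command line enabling the mitigations mentioned:
        cl /GS myapp.c /link /DYNAMICBASE /NXCOMPAT

        REM /GS           stack cookies (detects stack smashing)
        REM /DYNAMICBASE  opts the image into ASLR
        REM /NXCOMPAT     marks the image DEP-compatible

    The security value is probabilistic: return-to-libc and similar attacks need known absolute addresses, and randomizing the layout on each boot or launch turns a once-reliable exploit into a crash most of the time. Other platforms pursue the same idea, e.g. Linux ASLR with gcc's -fPIE and a PIE-linked binary.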

  • Should I invest time in learning the Java language these days? (question from a greenhorn)

    - by dave-keiture
    Hi experts. Assuming you've already had a chance to look through the lambda syntax proposed for Java 7 (and the other things that have happened with Java after Oracle bought Sun, plus the obvious problems in the Java Community Process), what do you think the future of the Java language is? Should I, as a Java greenhorn, invest time in learning the Java language (I'm not talking about the core JVM, which will definitely survive anything and is worth the investment), or should I concentrate on Scala, Groovy, or other hybrid languages on the JVM platform? (I came into the Java world from PHP/Ruby.) Thanks in advance.

  • What database strategy to choose for a large web application

    - by Snoopy
    I have to rewrite a large database application running on 32 servers. The hardware is up to date; each machine has two quad-core Xeons and 32 GB of RAM. The database is multi-tenant; each customer has his own file, around 5 to 10 GB each. I run around 50 databases on this hardware. The app is open to the web, so I have no control over the load. There are no really complex queries, so SQL is not required if there is a better solution. The databases get updated via FTP every day at midnight, and the database is read-only. C# is my favourite language, and I want to use ASP.NET MVC.

    I thought about the following options:

        - Use two big SQL servers running SQL Server 2012 to serve the 32 servers with data, with the 32 servers running IIS and providing REST services.
        - Denormalize the database and use Redis on each webserver, with BookSleeve as the Redis client.
        - Use a combination of SQL Server and Redis.
        - Use SQL Server 2012 together with Hadoop.
        - Use Hadoop without SQL Server.

    What is the best way for a read-only database to get the best performance without losing maintainability? Does map-reduce make sense at all in such a scenario?

    The reason for the rewrite is that the old app, written in C++ with ISAM technology, is too slow, and its interfaces are old-fashioned and not nice to use from a website, especially with Ajax. The app uses a relational data model with many tables, but it is possible to write one accelerator table that all queries can be performed on, with all other information from the other tables reachable by a simple key lookup.
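
    A sketch of the denormalized-Redis option (the client interface below is a hypothetical stand-in, not BookSleeve's actual API; the point is the key scheme): since the data only changes at the nightly FTP load, everything the accelerator table answers can be precomputed at import time into one key lookup per request:

        // Keys are precomputed by the nightly import, one per tenant:
        //   "tenant:{tenantId}:acct:{accountNo}"  ->  serialized record
        public interface IKeyValueStore        // hypothetical abstraction
        {
            string Get(string key);
            void Set(string key, string value);
        }

        public class AccountReader
        {
            private readonly IKeyValueStore store;
            public AccountReader(IKeyValueStore store) { this.store = store; }

            public string Lookup(int tenantId, string accountNo)
            {
                // O(1) fetch replaces the relational join chain; the import
                // job already flattened the other tables into this value.
                return store.Get("tenant:" + tenantId + ":acct:" + accountNo);
            }
        }

    On the map-reduce question: Hadoop shines for batch scans over data too large for one machine, while this workload is many small point reads against ~500 GB total, which is exactly the per-webserver-cache shape rather than the map-reduce shape.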

  • PHP function to read upload_max_filesize

    - by Marc
    I've been searching for a while on php.net and I can't find what I'm looking for. I need a way to read the upload_max_filesize value from a PHP function. Here is the setting I mean: http://www.php.net/manual/en/ini.core.php#ini.upload-max-filesize Thanks in advance!
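
    ini_get() is the function for this; the wrinkle is that it returns the php.ini shorthand string ("8M"), so a small helper is needed to get bytes (ini_bytes below is just an illustrative name), and the effective ceiling is the smaller of two settings:

        <?php
        // Convert php.ini shorthand like "8M" or "512K" to plain bytes.
        function ini_bytes($val) {
            $val  = trim($val);
            $unit = strtolower(substr($val, -1));
            $num  = (float) $val;
            switch ($unit) {
                case 'g': $num *= 1024; // deliberate fall-through
                case 'm': $num *= 1024;
                case 'k': $num *= 1024;
            }
            return (int) $num;
        }

        // The real upload limit is the smaller of the two related settings.
        $max_upload = min(ini_bytes(ini_get('upload_max_filesize')),
                          ini_bytes(ini_get('post_max_size')));
        echo $max_upload;

    Checking post_max_size as well matters because a POST body larger than it is discarded before upload_max_filesize is ever consulted.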

  • Embedded CouchDB

    - by Chang
    CouchDB is great. I like its replication functionality, but it's a bit large and slow when used in a desktop application. As I tested on an Intel Core Duo CPU, it takes 12 seconds to load 10,000 docs, and 10 seconds to insert 10,000 docs but then 20 more seconds to update the view, so 30 seconds in total. Is there any NoSQL implementation with the same replication functionality but a very small footprint and good speed (say, 1 second to load 10,000 docs)? Thanks

  • Target Framework does not change in Visual Studio 2010

    - by Adam Driscoll
    When I change the target framework of any project in Visual Studio 2010, it does not actually change the System assembly references. For example, if I target v2.0 and check the properties of System and System.Data, I can see that they are still both v4.0. If I change the target to v3.5, System stays at v4.0 but System.Core changes to v3.5. Because of this, I am truly not targeting anything except v4.0.
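
    For what it's worth, the setting itself lives in the project file, so one way to verify what the dropdown actually did is to unload the project and edit the .csproj; the element the dropdown writes looks like this:

        <PropertyGroup>
          <TargetFrameworkVersion>v3.5</TargetFrameworkVersion>
        </PropertyGroup>

    If that element is correct, the v4.0 shown in the References properties pane may just be the designer resolving against the installed assembly on disk; what matters is the reference versions MSBuild records in the compiled assembly, which ildasm's manifest view shows directly.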

  • Magento adds slashes to file name

    - by Kein
    I am trying to extend a core class, but I get this error:

        Warning: include( \\\\\\\\\\\\MyModule\Ajaxsearch\Model\Resource\Eav\Mysql4\Product\Collection \\\\\\\\\\.php) [function.include]: failed to open stream:...

    Why does Magento add slashes to the address? Maybe a config error?
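
    Some background that may explain the garbage path (a sketch of how Magento's autoloader works; the class name in your module's config is the thing to check): Varien_Autoload builds the include path purely from the class name, turning each underscore into a directory separator, approximately:

        // Inside Varien_Autoload::autoload(), roughly:
        $classFile = str_replace(' ', DIRECTORY_SEPARATOR,
                         ucwords(str_replace('_', ' ', $class))) . '.php';
        return include $classFile;

    A pile of separators in the warning therefore means the autoloader was handed a malformed class name (stray separators or empty segments), usually from a typo in the module's config.xml rewrite or model declaration rather than from the file itself.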

  • Syncing Online Content with iPhone Application

    - by PF1
    Hi Everyone: I am looking for some way to sync a online XML file with my iPhone application and only download the newest changed items. Each item is marked with a date attribute, so I assume this is possible. I have heard that Core Data can accomplish this task, but I am unsure of the suggested method and how to approach implementing it. Thanks for any help.
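
    Core Data won't talk to the feed itself, but it stores the high-water mark cheaply. A sketch (written in today's Swift for brevity; the entity name Item and attribute date are assumptions standing in for your model): fetch the newest stored date, then parse and insert only feed entries newer than it:

        import CoreData

        func newestStoredDate(in context: NSManagedObjectContext) -> Date? {
            let request = NSFetchRequest<NSManagedObject>(entityName: "Item")
            request.sortDescriptors = [NSSortDescriptor(key: "date", ascending: false)]
            request.fetchLimit = 1
            guard let newest = (try? context.fetch(request))?.first else { return nil }
            return newest.value(forKey: "date") as? Date
        }

        // While parsing the XML: skip any entry whose date attribute is not
        // strictly newer than newestStoredDate(in:), insert the rest, then
        // save the context once at the end.

    With fetchLimit = 1 and a sort descriptor this is a single lookup, so checking the watermark before each sync is essentially free.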

  • Invoking WSO2 admin services from SoapUI

    - by NGoyal
    I'm working with WSO2 admin services. I get the URL http://localhost:9763/services/AuthenticationAdmin?wsdl for AuthenticationAdmin. When I hit the login operation with admin, admin, 127.0.0.1, I get true as the return value, and the ESB console shows me logged in. But when I hit the logout operation, I don't get any response, and I notice that the header of the response does not contain any session ID. My ESB is 4.6.0.

    Login request:

        <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                          xmlns:aut="http://authentication.services.core.carbon.wso2.org">
           <soapenv:Header/>
           <soapenv:Body>
              <aut:login>
                 <!--Optional:-->
                 <aut:username>admin</aut:username>
                 <!--Optional:-->
                 <aut:password>admin</aut:password>
                 <!--Optional:-->
                 <aut:remoteAddress>127.0.0.1</aut:remoteAddress>
              </aut:login>
           </soapenv:Body>
        </soapenv:Envelope>

    Login response:

        <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
           <soapenv:Body>
              <ns:loginResponse xmlns:ns="http://authentication.services.core.carbon.wso2.org">
                 <ns:return>true</ns:return>
              </ns:loginResponse>
           </soapenv:Body>
        </soapenv:Envelope>

    When I hit login, I only see these six fields in the response header:

        Date               Tue, 25 Jun 2013 14:31:42 GMT
        Transfer-Encoding  chunked
        #status#           HTTP/1.1 200 OK
        Content-Type       text/xml; charset=UTF-8
        Connection         Keep-Alive
        Server             WSO2-PassThrough-HTTP

    I don't get a session ID. Can you please point out where I'm going wrong? My scenario is that I want to log in to WSO2 and then hit some other admin service operation.
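
    One known pitfall that matches these symptoms (stated as a likely cause, not verified against ESB 4.6.0 specifically): AuthenticationAdmin returns the session as a JSESSIONID cookie in a Set-Cookie header of the login response, and every later call, including logout, must send that cookie back, otherwise there is no session to act on. Clients usually handle this in one of two ways:

        # 1. In SoapUI, enable session handling so cookies are carried across
        #    requests (the "Maintain HTTP session" setting).

        # 2. By hand, copy the cookie from the login response into later calls:
        POST /services/AuthenticationAdmin HTTP/1.1
        Cookie: JSESSIONID=<value from the login response's Set-Cookie header>

    If Set-Cookie is missing entirely, the Server header above (WSO2-PassThrough-HTTP) suggests the request went through the pass-through transport rather than the management servlet transport; trying the management HTTPS port (9443) instead may be worth a shot.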

  • GWT: JavaScript implementation of JRE classes

    - by chris_l
    Sometimes I'd like to take a peek at the implementation of the JRE classes that is used to generate the JavaScript code. For some classes, I can find a corresponding implementation by guessing its name, e.g. com.google.gwt.core.client.impl.StringBuilderImpl. But where's the implementation for java.util.Date, for example? Where do I find it, and how does GWT find it (via some configuration file)?
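
    GWT's JRE emulation sources live inside gwt-user.jar under com/google/gwt/emul/, so java.util.Date is at com/google/gwt/emul/java/util/Date.java. They are wired in through the module system's super-source mechanism; the relevant module file contains, approximately:

        <!-- com/google/gwt/emul/Emulation.gwt.xml (inside gwt-user.jar), roughly: -->
        <module>
           <super-source path="emul"/>
        </module>

    super-source tells the compiler to treat everything under that path as if it sat at the package root, shadowing the real JRE classes during compilation to JavaScript; inheriting the core module pulls this in for every GWT project.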

  • Generate A Simple Read-Only DAL?

    - by David
    I've been looking around for a simple solution to this, trying my best to lean towards something like NHibernate, but so far everything I've found seems to be trying to solve a slightly different problem. Here's what I'm looking at in my current project.

    We have an IBM iSeries database as the primary repository for a third-party software suite used for our core business (a financial institution). Part of what my team does is write applications that report on or key off of a lot of this data in some way. In the past, we've been manually creating ADO.NET connections (we're using .NET 3.5 and Visual Studio 2008, by the way) and manually writing queries, etc. Moving forward, I'd like to simplify the process of getting data from there for the development team. Rather than creating connections and queries and all that each time, I'd much rather a developer be able to simply do something like this:

        var something = (from t in TableName select t);

    And, ideally, they'd just get some IQueryable or IEnumerable of generated entities. This would be done inside a new domain core that I'm building, where these entities would live and where the applications would interface with it through a request/response service layer. A few things to note:

        - The entities that correspond to the database tables should be generated once, and we'd prefer to manually keep them updated over time. That is, if columns/tables are added to the database, we shouldn't have to do anything (if some are deleted, of course, it will break, but that's fine). But if we need to use a new column, we should be able to just add it to the necessary class(es) without having to re-gen the whole thing.
        - The whole thing should be SELECT-only. We're not doing a full DAL here because we don't want to be able to break anything in the database (even accidentally).
        - We don't need any kind of mapping between our domain objects and the generated entity types. The domain barely covers a fraction of the data that's in there; most of it we'll never need, and we would rather just create re-usable maps manually over time. I already have a logical separation for the DAL where my "repository" classes return domain objects; I'm just looking for a better alternative to manual ADO inside the repository classes.

    Any suggestions? It seems like what I'm doing is just enough outside the normal demand for DAL/ORM tools/tutorials online that I haven't been able to find anything. Or maybe I'm just overlooking something obvious?
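
    A minimal sketch of the "write the entities once, keep them by hand" idea (the ColumnAttribute and Query helper below are illustrative names, not an existing library; LINQ to SQL's attribute mapping is similar in spirit but its provider only targets SQL Server, not iSeries): each entity lists its columns, and one generic reader materializes any SELECT, so SELECT-only is enforced by construction because no write path exists:

        using System;
        using System.Collections.Generic;
        using System.Data;
        using System.Reflection;

        // Hand-maintained mapping: add a property only when a column is needed.
        [AttributeUsage(AttributeTargets.Property)]
        public class ColumnAttribute : Attribute
        {
            public string Name { get; private set; }
            public ColumnAttribute(string name) { Name = name; }
        }

        public static class ReadOnlyDal
        {
            // Materialize any SELECT into entities over any IDbConnection
            // (e.g. the iSeries ADO.NET provider's connection type).
            public static IEnumerable<T> Query<T>(IDbConnection conn, string sql)
                where T : new()
            {
                using (IDbCommand cmd = conn.CreateCommand())
                {
                    cmd.CommandText = sql;
                    using (IDataReader rdr = cmd.ExecuteReader())
                    {
                        while (rdr.Read())
                        {
                            T item = new T();
                            foreach (PropertyInfo p in typeof(T).GetProperties())
                            {
                                object[] attrs = p.GetCustomAttributes(typeof(ColumnAttribute), false);
                                if (attrs.Length == 0) continue;
                                object val = rdr[((ColumnAttribute)attrs[0]).Name];
                                if (val != DBNull.Value) p.SetValue(item, val, null);
                            }
                            yield return item;
                        }
                    }
                }
            }
        }

    The repository classes then return IEnumerable<T> from Query and never expose a command object, which gives the "can't break the database even accidentally" property without an ORM; what it gives up, compared to the LINQ syntax above, is query composition, since the SQL strings stay hand-written.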
