Search Results

Search found 14719 results on 589 pages for 'optimization level'.

Page 511/589 | < Previous Page | 507 508 509 510 511 512 513 514 515 516 517 518  | Next Page >

  • Java: Typecasting to Generics

    - by bguiz
    This method uses method-level generics to parse the values from a custom POJO, JXlistOfKeyValuePairs (which is exactly what the name says). The catch is that both the keys and the values in JXlistOfKeyValuePairs are Strings. In addition to the JXlistOfKeyValuePairs instance, the method takes a Class<T> that defines which data type to convert the values to (assume that only Boolean, Integer and Float are possible). It then outputs a HashMap with the specified type for the values in its entries. This is the code I have, and it is obviously broken:

        private <T extends Object> Map<String, T> fromListOfKeyValuePairs(JXlistOfKeyValuePairs jxval, Class<T> clasz) {
            Map<String, T> val = new HashMap<String, T>();
            List<Entry> jxents = jxval.getEntry();
            T value;
            String str;
            for (Entry jxent : jxents) {
                str = jxent.getValue();
                value = null;
                if (clasz.isAssignableFrom(Boolean.class)) {
                    value = (T)(Boolean.parseBoolean(str));
                } else if (clasz.isAssignableFrom(Integer.class)) {
                    value = (T)(Integer.parseInt(str));
                } else if (clasz.isAssignableFrom(Float.class)) {
                    value = (T)(Float.parseFloat(str));
                } else {
                    logger.warn("Unsupported value type encountered in key-value pairs, continuing anyway: " + clasz.getName());
                }
                val.put(jxent.getKey(), value);
            }
            return val;
        }

    This is the bit I want to solve:

        if (clasz.isAssignableFrom(Boolean.class)) {
            value = (T)(Boolean.parseBoolean(str));
        } else if (clasz.isAssignableFrom(Integer.class)) {
            value = (T)(Integer.parseInt(str));
        }

    I get:

        Inconvertible types
        required: T
        found:    Boolean

    Also, if possible, I would like to do this with more elegant code, avoiding Class#isAssignableFrom. Any suggestions? Sample method invocation:

        Map<String, Boolean> foo = fromListOfKeyValuePairs(bar, Boolean.class);

    Read the article

  • iPhone viewWillAppear not firing

    - by chzk
    I've read numerous posts about people having problems with viewWillAppear when you do not create your view hierarchy JUST right. My problem is I can't figure out what that means. If I create a RootViewController and call addSubview on that controller, I would expect the added view(s) to be wired up for viewWillAppear events. Does anyone have an example of a complex programmatic view hierarchy that successfully receives viewWillAppear events at every level? The Apple docs state:

        Warning: If the view belonging to a view controller is added to a view hierarchy directly, the view controller will not receive this message. If you insert or add a view to the view hierarchy, and it has a view controller, you should send the associated view controller this message directly. Failing to send the view controller this message will prevent any associated animation from being displayed.

    The problem is that they don't describe how to do this. What exactly does "directly" mean? How do you "indirectly" add a view? I am fairly new to Cocoa and iPhone, so it would be nice if there were useful examples from Apple beyond the basic Hello World material. Any help is greatly appreciated.

    Read the article

  • Valid javascript object property names

    - by hawkettc
    I'm trying to work out what is considered valid for the property name of a JavaScript object. For example:

        var b = {};
        b['-^colour'] = "blue";       // Works fine in Firefox, Chrome, Safari
        b['colour'] = "green";        // Ditto
        alert(b['-^colour']);         // Ditto
        alert(b.colour);              // Ditto
        for (prop in b) alert(prop);  // Ditto
        //alert(b.-^colour);          // Fails (expected)

    This post details valid JavaScript variable names, and '-^colour' is clearly not valid (as a variable name). Does the same apply to object property names? Looking at the above, I'm trying to work out which of these is true:

        - b['-^colour'] is invalid, but works in all browsers by quirk, and I shouldn't trust it to keep working
        - b['-^colour'] is completely valid, but it's just of a form that can only be accessed in this manner (it's supported so objects can be used as maps, perhaps?)
        - Something else

    As an aside, a global variable in JavaScript might be declared at the top level as:

        var abc = 0;

    but could also be created (as I understand it) with:

        window['abc'] = 0;

    The following works in all the above browsers:

        window['@£$%'] = "bling!";
        alert(window['@£$%']);

    Is this valid? It seems to contradict the variable naming rules - or am I not declaring a variable there? What's the difference between a variable and an object property name? Cheers, Colin

    Read the article

  • Continuous build infrastructure recommendations for primarily C++; GreenHills Integrity

    - by andersoj
    I need your recommendations for continuous build products for a large (1-2 MLOC) software development project. Characteristics:

        - ClearCase revision control
        - Approx 80% C++; 15% Java; 5% script or low-level
        - Compiles for Green Hills Integrity OS, but also some Windows and JVM chunks
        - Mostly an embedded system; also includes some UI pieces and some development support (simulation tools, config tools, etc.)
        - Each notional "version" of the deliverable includes deployment images for a number of boards, UI machines, etc. (~10 separate images; 5 distinct operating systems)
        - Need to maintain/track many simultaneous versions which, notably, are built for a variety of different board support packages
        - Build cycle time is a major issue on the project; need support for whatever features help address this (mostly managing a large farm of build machines, I guess)
        - Operates in a secure environment (this is a gov't program). (Edited to add: This is a classified program; outsourcing the build infrastructure is a non-starter.)

    Interested in any best practices or peripheral guidance you might offer. Build automation is one of several overlapping best practices that appear to be missing on the program, but try to keep your answers focused on the build infrastructure piece and observations directly related to it. Cost is not an object. Scalability and ease of retrofitting onto an existing infrastructure are key. JA

    Read the article

  • I cannot make log4net work in my web application :(

    - by vtortola
    Hi, I'm trying to set up log4net but I cannot make it work. I've put this in my Web.config:

        <configSections>
          <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
        <appender name="RollingFileAppender" type="log4net.Appender.RollingFileAppender">
          <file value="logfile.log" />
          <appendToFile value="true" />
          <rollingStyle value="Composite" />
          <maxSizeRollBackups value="14" />
          <maximumFileSize value="15000KB" />
          <datePattern value="yyyyMMdd" />
          <layout type="log4net.Layout.PatternLayout">
            <conversionPattern value="%date [%thread] %-5level %logger [%property{NDC}] - %message%newline" />
          </layout>
        </appender>
        <root>
          <level value="DEBUG" />
          <appender-ref ref="RollingFileAppender" />
          <appender-ref ref="TraceAppender" />
        </root>

    (Stack Overflow is not rendering the code I've pasted correctly, I don't know why.) Then, in my code, I execute:

        log4net.Config.XmlConfigurator.Configure(new FileInfo(HttpContext.Current.Server.MapPath("~/Web.config")));
        ILog log = LogManager.GetLogger("MainLogger");
        if (log.IsDebugEnabled)
            log.Debug("lalala");

    But nothing happens. I check the "log" variable, and it contains a LogImpl object that has all the logging levels enabled. I get no error or configuration warning, and I cannot see any file in the root, in the bin or anywhere. What do I have to do to make it work? Cheers.
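    One way to rule out configuration-file problems while debugging is to configure log4net from code and see whether the file appears. This is only a diagnostic sketch: the App_Data target path and the logger name are assumptions, and the appender settings simply mirror the ones above.

        // Diagnostic sketch: build the appender in code so Web.config is out of the
        // picture. If this writes the file, the XML route is the suspect (for
        // instance, the appender/root elements must sit inside a <log4net> element
        // for XmlConfigurator to pick them up).
        var layout = new log4net.Layout.PatternLayout
        {
            ConversionPattern = "%date [%thread] %-5level %logger - %message%newline"
        };
        layout.ActivateOptions();

        var appender = new log4net.Appender.RollingFileAppender
        {
            File = HttpContext.Current.Server.MapPath("~/App_Data/logfile.log"),
            AppendToFile = true,
            MaximumFileSize = "15000KB",
            MaxSizeRollBackups = 14,
            Layout = layout
        };
        appender.ActivateOptions();

        log4net.Config.BasicConfigurator.Configure(appender);
        log4net.LogManager.GetLogger("MainLogger").Debug("programmatic config works");

    If the programmatic version also writes nothing, the account the site runs under probably lacks write permission to the target folder, which is a separate issue from the configuration itself.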

    Read the article

  • File upload fails when the user is authenticated, using IIS7 Integrated mode

    - by Nikkelmann
    These are the user identities my website tells me that it uses:

        Logged on:     NT AUTHORITY\NETWORK SERVICE  (cannot write any files at all)
        Not logged on: WSW32\IUSR_77                 (can write files to any folder)

    I have an ASP.NET 4.0 website on a shared-hosting IIS7 web server running in Integrated mode with 32-bit application support enabled, and MSSQL 2008. Using Classic mode is not an option since I need to secure some static files and I use Routing. In my web.config file I have set the following:

        <system.webServer>
          <modules runAllManagedModulesForAllRequests="true" />
        </system.webServer>

    My hosting company says that impersonation is enabled by default at machine level, so this is not something I can change. I asked their support and they referred me to this article: http://www.codinghub.net/2010/08/differences-between-integrated-mode-and.html Citing this part:

        Different windows identity in Forms authentication
        When Forms Authentication is used by an application and anonymous access is allowed, the Integrated mode identity differs from the Classic mode identity in the following ways:
        - ServerVariables["LOGON_USER"] is filled.
        - Request.LogonUserIdentity uses the credentials of the [NT AUTHORITY\NETWORK SERVICE] account instead of the [NT AUTHORITY\INTERNET USER] account.
        This behavior occurs because authentication is performed in a single stage in Integrated mode. Conversely, in Classic mode, authentication occurs first with IIS 7.0 using anonymous access, and then with ASP.NET using Forms authentication. Thus, the result of the authentication is always a single user: the Forms authentication user. AUTH_USER/LOGON_USER returns this same user because the Forms authentication user credentials are synchronized between IIS 7.0 and ASP.NET. A side effect is that LOGON_USER, HttpRequest.LogonUserIdentity, and impersonation no longer can access the Anonymous user credentials that IIS 7.0 would have authenticated by using Classic mode.

    How do I set up my website so that it can use the proper identity with the proper permissions? I've looked high and low for answers to this specific problem, but found nothing so far... I hope you can help!
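    A small diagnostic along these lines (the page, the probe path and the messages are all hypothetical) can confirm which Windows identity actually performs the write in the logged-on and anonymous cases, which is usually the quickest way to see what permissions need granting and to whom:

        // Diagnostic sketch, not a fix: report the forms-auth user, the worker
        // process identity and the logon identity IIS associates with the request,
        // then attempt the same kind of write the upload code performs.
        protected void Page_Load(object sender, EventArgs e)
        {
            string formsUser   = User.Identity.IsAuthenticated ? User.Identity.Name : "(anonymous)";
            string processUser = System.Security.Principal.WindowsIdentity.GetCurrent().Name;
            string logonUser   = Request.LogonUserIdentity != null ? Request.LogonUserIdentity.Name : "(none)";

            string target = Server.MapPath("~/App_Data/probe.txt");   // hypothetical upload folder
            try
            {
                System.IO.File.WriteAllText(target, DateTime.UtcNow.ToString("o"));
                Response.Write(string.Format("OK - forms: {0}, process: {1}, logon: {2}", formsUser, processUser, logonUser));
            }
            catch (UnauthorizedAccessException ex)
            {
                Response.Write(string.Format("DENIED for {0}: {1}", logonUser, ex.Message));
            }
        }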

    Read the article

  • Flex Tree with infinite parents and children

    - by Tempname
    I am working on a tree component and I am having a bit of an issue populating the dataProvider for this tree. The data that I get back from my database is a simple array of value objects. Each value object has two properties, ObjectID and ParentID. For parents the ParentID is null, and for children the ParentID is the ObjectID of the parent. Any help with this is greatly appreciated. Essentially the tree should look something like this (original indentation lost):

        Parent1
        Child1
        Child1
        Child2
        Child1
        Child2
        Parent2
        Child1
        Child2
        Child3
        Child1

    This is the current code that I am testing with:

        public function setDataProvider(data:Array):void
        {
            var tree:Array = new Array();

            for (var i:Number = 0; i < data.length; i++)
            {
                // do the top level array
                if (!data[i].parentID)
                {
                    tree.push(data[i], getChildren(data[i].objectID, data));
                }
            }

            function getChildren(objectID:Number, data:Array):Array
            {
                var childArr:Array = new Array();
                for (var k:Number = 0; k < data.length; k++)
                {
                    if (data[k].parentID == objectID)
                    {
                        childArr.push(data[k]);
                        //getChildren(data[k].objectID, data);
                    }
                }
                return childArr;
            }

            trace(ObjectUtil.toString(tree));
        }

    Here is a cross section of my data:

        ObjectID  ParentID
        1         NULL
        10        NULL
        8         NULL
        6         NULL
        4         6
        3         6
        9         6
        2         6
        11        7
        7         8
        5         8

    Read the article

  • Using Relative Paths to Load Resources in Cocoa/C++

    - by moka
    I am currently working directly with Cocoa for the first time to build a screen saver. Now I've run into a problem when trying to load resources from within the .saver bundle. I basically have a small C++ wrapper class to load .exr files using FreeImage. This works as long as I use absolute paths, but that's not very useful, is it? So, basically, I tried everything: putting the .exr file at the level of the .saver bundle itself, inside the bundle's Resources folder, and so on. Then I simply tried to load the .exr like this, but without success:

        particleTex = [self loadExrTexture:@"ball.exr"];

    I also tried making it go to the .saver bundle's location like this:

        particleTex = [self loadExrTexture:@"../../../ball.exr"];

    ...to maybe load the .exr from that location, but without success. I then came across this:

        NSString * path = [[NSBundle mainBundle] pathForResource:@"ball" ofType:@"exr"];
        const char * pChar = [path UTF8String];

    ...which seems to be a common way to find resources in Cocoa, but for some reason the path is empty in my case. Any ideas about that? I really tried everything that came to mind, without success, so I would be glad about some input!

    Read the article

  • RegEx expression or jQuery selector to NOT match "external" links in href

    - by TrueBlueAussie
    I have a jQuery plugin that overrides link behavior to allow Ajax loading of page content. Simple enough with a delegated event like $(document).on('click', 'a', function(){});, but I only want it to apply to links that are not like these ones (Ajax loading is not applicable to them, so links like these need to behave normally):

        target="_blank"       // New browser window
        href="#..."           // Bookmark link (page is already loaded)
        href="afs://..."      // AFS file access
        href="cid://..."      // Content identifiers for MIME body part
        href="file://..."     // Specifies the address of a file on the locally accessible drive
        href="ftp://..."      // Uses Internet File Transfer Protocol (FTP) to retrieve a file
        href="http://..."     // The most commonly used access method
        href="https://..."    // Provides some level of security of transmission
        href="mailto://..."   // Opens an email program
        href="mid://..."      // The message identifier for email
        href="news://..."     // Usenet newsgroup
        href="x-exec://..."   // Executable program
        href="http://AnythingNotHere.com"  // External links

    Sample code:

        $(document).on('click', 'a:not([target="_blank"])', function(){
            var $this = $(this);
            if ('some additional check of href'){
                // Do ajax load and stop default behaviour
                return false;
            }
            // allow link to work normally
        });

    Q: Is there a way to easily detect all "local links" that would only navigate within the current website, excluding all the variations mentioned above? Note: This is for an MVC 5 Razor website, so absolute site URLs are unlikely to occur.

    Read the article

  • Visual Studio 2008 linker wants all symbols to be resolved, not only used ones

    - by user343011
    We recently upgraded to Visual Studio 2008 from 2005, and I think these errors started after that. In our solution, we have a multitude of projects. Many of them are utility projects, or projects containing core functionality used by other projects. The output of those is .lib files that are linked into the projects generating the final binaries, using the "Project dependencies..." option.

    One of the other projects (let us call it ResultLib) generates a DLL, and it needs one single function from the core project. This function uses only static functions from its own source file, but the core project in its entirety uses a lot of low-level Windows functions and also imports a DLL (let us call it Driver.dll).

    Our problem is that when building ResultLib, the linker complains about a multitude of unresolved externals, for example all functions exported from Driver.dll, since its .lib file is not specified when linking. If we try to fix this by adding all the .lib files used by other projects that use all of the core project, our resulting ResultLib DLL ends up importing Driver.dll and also exporting all functions defined in it. How do we tell Visual Studio to only try to resolve symbols that are actually used?

    Read the article

  • How to understand existing projects

    - by John
    Hi. I am a trainee developer and have been writing .NET applications for about a year now. Most of the work I have done has involved building new applications (mainly web apps) from scratch, and I have been given more or less full control over the software design. This has been a great experience; however, as a trainee developer my confidence about whether the approaches I have taken are the best is minimal. Ideally I would love to collaborate with more experienced developers (I find this the best way I learn), but in the company I work for, developers tend to work in isolation (a great shame for me).

    Recently I decided that a good way to learn more about how experienced developers approach their designs might be to explore some open source projects. I found myself a little overwhelmed by the projects I looked at. With my level of experience it was hard to understand the body of code I faced. My question is a slightly fuzzy one: how do developers approach the task of understanding a new medium- to large-scale project? I found myself poring over lots of code and struggling to see the wood for the trees. At any one time I felt that I could understand a small portion of the system but not see how it all fits together. Do others get this same feeling? If so, what approaches do you take to understanding the project? Do you have any other advice about how to learn design best practices? Any advice will be very much appreciated. Thank you.

    Read the article

  • What's the "correct way" to organize this project?

    - by user571747
    I'm working on a project that allows multiple users to submit large data files and perform operations on them. The "backend" which performs these operations is written in Perl, while the "frontend" uses PHP to load HTML template files and determine which content to deliver. Data is stored in a database (MySQL, SQLite, Oracle), and while there is data which has not yet been acted upon, Perl adds it to a running queue which delivers data to other threads based on system load. In addition, there may be pre- and post-processing of the data before and after the main Perl script operates (the specifications are unclear), so I may want to allow these processors to be user-selectable plugins.

    I had been writing this project in a more procedural fashion, but I am quickly realizing the benefit of separating concerns so as to limit the scope one change has on the rest of the project. I'm quite inexperienced with design patterns and am curious what the best way to proceed is. I've heard MVC thrown around quite a bit, but I am unsure of how to apply it. Specifically, what are some good options for structuring this code (in terms of design patterns and folder hierarchy)? How can I achieve this with both PHP and Perl while minimizing duplicated code between languages? Should I keep my PHP files in the top level so I don't have ugly paths in the URL? Also, if I want to provide interchangeable databases, does each table need its own DAO implementation?

    Read the article

  • Are these SAML request and response good enough?

    - by Ashwin
    I have set up single sign-on (SSO) for my services. All the services confirm the identity of the user using the IdP (identity provider); in my case I am also the IdP. In my SAML request, I have included the following:

        1. The level for which authentication is required
        2. The consumer URL
        3. The destination service URL
        4. Issuer

    Then I encrypt this message with the SP's (service provider's) private key and then with the IdP's public key, and send the request. The IdP, on receiving the request, first decrypts it with its own private key and then with the SP's public key. In the SAML response:

        1. Destination URL
        2. Issuer
        3. Status of the response

    Is this good enough? Please give your suggestions.

    Read the article

  • Importing package as a submodule

    - by wecac
    Hi, I have a 3rd-party open source package "foo" that is in its beta phase, and I want to tweak it to my requirements. So I don't want it installed in /usr/local/lib/python or anywhere else on the current sys.path, as I can't make frequent changes to top-level packages.

        foo/
            __init__.py
            fmod1.py      (contains: import foo.mod2)
            fmod2.py      (contains: pass)

    I want to install the package "foo" as a sub-package of my namespace, say "team.my_pkg", so that the "full name" of the package becomes "team.my_pkg.foo" without changing the code in the inner modules that refer to "team.my_pkg.foo" as "foo".

        team/
            __init__.py
            my_pkg/
                __init__.py
                foo/
                    fmod1.py      (contains: import foo.mod2)
                    fmod2.py      (contains: pass)

    One way to do this is in team/my_pkg/__init__.py:

        import os.path
        import sys
        sys.path.append(os.path.dirname(__file__))

    But I think it is very unsafe. I hope there is some way that only fmod1.py and fmod2.py can refer to "foo" by its short name; everything else should use its complete name "team.my_pkg.foo". I mean this should fail outside team/my_pkg/foo:

        import team.my_pkg
        import foo

    But this should succeed outside team/my_pkg/foo:

        import team.my_pkg.foo

    Read the article

  • Re-authentication required for registered-path links (to ASP.NET site) coming to IE from PowerPoint

    - by Daniel Halsey
    We're using URL routing based on Phil Haack's example, with config modifications based on MSDN Library article #CC668202, to provide "shareable" links for an ASP.NET forms site, and have run into a strange issue: for users attempting to open links from PowerPoint presentations, and who have IE set as their default browser, using one of these links forces (forms-based) re-authentication, even in the same browser instance with a live session. Info:

        - We know the session is still alive. (The page returns information for the currently logged-in user; confirmed via debug watches.)
        - This doesn't happen with other browsers (FF, Chrome) or with other programs (Notepad++) as the URL source.
        - We do not have a default path set, as this caused issues with root path handling at initial login.
        - This primarily happens with PowerPoint, but will also happen in Word and OCS.
        - On some machines, even after changing the default browser, Office apps will continue to use IE for these links, forcing this error. (A potential registry fix for this failed, but even if it had worked, we can't control default browser choice for our users.)

    We can't figure out whether this is an Office oddity or is being caused by our decision to use app-level URL routing (rather than IIS rewriting). Has anyone else encountered this and found a solution?
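    For what it's worth, a frequently suggested workaround for Office-launched links rests on the observation that Office issues its own probe request for the URL (without the browser's session cookie) before handing it to the browser, and follows any redirect it receives, so the browser can end up opening the login page. Answering that probe with a plain 200 avoids this. A rough Global.asax sketch, assuming the probe identifies itself with an "ms-office" user-agent string:

        // Sketch only: short-circuit requests whose User-Agent marks them as the
        // Office link probe, so the real navigation comes from the browser, which
        // still carries the forms-auth cookie.
        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            string ua = Request.UserAgent ?? string.Empty;
            if (ua.IndexOf("ms-office", StringComparison.OrdinalIgnoreCase) >= 0)
            {
                Response.StatusCode = 200;
                Response.Write("<html><body></body></html>");
                Response.End();
            }
        }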

    Read the article

  • How to limit TCP writes to a particular size and then block until the data is read

    - by ustulation
    {Qt 4.7.0, VS 2010} I have a server written in Qt and a 3rd-party client executable. The Qt-based server uses the QTcpServer and QTcpSocket facilities (non-blocking). Going through articles on TCP, I understand the following: the original TCP specification defined the negotiable window size as a 16-bit value, giving a maximum of 65535 bytes. But implementations often use the RFC window-scale extension, which allows the sliding window size to be scaled by bit-shifting to yield a maximum of 1 gigabyte. This is implementation defined.

    This could result in very different window sizes on the receiver and sender ends, as the server uses Qt facilities without hardcoding any window size limit. The client first asks for all the information it can, based on previous messages from the server, before handling the new (accumulating) incoming messages. So at some point the server receives a lot of messages, each asking for several MB of data. The server processes this and puts it into the send buffer. The client, however, is unable to handle the messages at the same pace, and it seems that the client's receive buffer is far smaller (65535 bytes maybe) than the sender's transmit window. The messages thus accumulate at the sender's end until the sender's buffer is full too, after which TCP writes on the sender would block. This however does not happen, as the sender buffer is much larger. Hence this manifests as an increase in memory consumption on the sender's end.

    To prevent this from happening, I used the Qt socket's waitForBytesWritten() with the timeout set to -1 for an infinite waiting period. As I see from the behaviour, this blocks the thread writing TCP data until the data has actually been accepted by the receiver's window (which happens once earlier messages have been processed by the client at the application level). This has made memory consumption at the server end almost negligible. Is there a better alternative to this (in Qt) if I want to restrict the memory consumption at the server end to, say, x MB? Also, please point out if any of my understandings is incorrect.

    Read the article

  • Rails nested attributes with a join model, where one of the models being joined is a new record

    - by gzuki
    I'm trying to build a grid, in Rails, for entering data. It has rows and columns, and rows and columns are joined by cells. In my view, I need the grid to be able to handle having 'new' rows and columns on the edge, so that if you type in them and then submit, they are automatically generated and their shared cells are connected to them correctly. I want to be able to do this without JS.

    Rails nested attributes fail to handle being mapped to both a new row and a new column; they can only do one or the other. The reason is that they are nested specifically in one of the two models, and whichever one they aren't nested in will have no id (since it doesn't exist yet); when pushed through accepts_nested_attributes_for on the top-level Grid model, they will only be bound to the new object created for whatever they were nested in. How can I handle this? Do I have to override Rails' handling of nested attributes? My models look like this, btw:

        class Grid < ActiveRecord::Base
          has_many :rows
          has_many :columns
          has_many :cells, :through => :rows
          accepts_nested_attributes_for :rows, :allow_destroy => true,
            :reject_if => lambda {|a| a[:description].blank? }
          accepts_nested_attributes_for :columns, :allow_destroy => true,
            :reject_if => lambda {|a| a[:description].blank? }
        end

        class Column < ActiveRecord::Base
          belongs_to :grid
          has_many :cells, :dependent => :destroy
          has_many :rows, :through => :grid
        end

        class Row < ActiveRecord::Base
          belongs_to :grid
          has_many :cells, :dependent => :destroy
          has_many :columns, :through => :grid
          accepts_nested_attributes_for :cells
        end

        class Cell < ActiveRecord::Base
          belongs_to :row
          belongs_to :column
          has_one :grid, :through => :row
        end

    Read the article

  • How to perform a Depth First Search iteratively using async/parallel processing?

    - by Prabhu
    Here is a method that does a DFS search and returns a list of all items, given a top-level item id. How could I modify this to take advantage of parallel processing? Currently, the call to get the sub-items is made one by one for each item in the stack. It would be nice if I could get the sub-items for multiple items in the stack at the same time, and populate my return list faster. How could I do this (either using async/await or the TPL, or anything else) in a thread-safe manner?

        private async Task<IList<Item>> GetItemsAsync(string topItemId)
        {
            var items = new List<Item>();
            var topItem = await GetItemAsync(topItemId);

            Stack<Item> stack = new Stack<Item>();
            stack.Push(topItem);
            while (stack.Count > 0)
            {
                var item = stack.Pop();
                items.Add(item);
                var subItems = await GetSubItemsAsync(item.SubId);
                foreach (var subItem in subItems)
                {
                    stack.Push(subItem);
                }
            }
            return items;
        }

    EDIT: I was thinking of something along these lines, but it's not coming together:

        var tasks = stack.Select(async item =>
        {
            items.Add(item);
            var subItems = await GetSubItemsAsync(item.SubId);
            foreach (var subItem in subItems)
            {
                stack.Push(subItem);
            }
        }).ToList();
        if (tasks.Any()) await Task.WhenAll(tasks);

    UPDATE: If I wanted to chunk the tasks, would something like this work?

        foreach (var batch in items.BatchesOf(100))
        {
            var tasks = batch.Select(async item =>
            {
                await DoSomething(item);
            }).ToList();
            if (tasks.Any())
            {
                await Task.WhenAll(tasks);
            }
        }

    The language I'm using is C#.
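    One common shape for this kind of fan-out (a sketch built from the method above; it assumes GetItemAsync and GetSubItemsAsync are safe to call concurrently) is to work one level at a time: fetch the children of every item in the current frontier with a single Task.WhenAll, then move on. The traversal becomes breadth-first rather than depth-first, and because only the awaiting method touches the result list, no extra locking is needed:

        private async Task<IList<Item>> GetItemsAsync(string topItemId)
        {
            var items = new List<Item>();
            var frontier = new List<Item> { await GetItemAsync(topItemId) };

            while (frontier.Count > 0)
            {
                items.AddRange(frontier);

                // Fetch the sub-items of every item in the current level concurrently.
                var childLists = await Task.WhenAll(
                    frontier.Select(item => GetSubItemsAsync(item.SubId)));

                // The next frontier is the concatenation of all the child lists.
                frontier = childLists.SelectMany(children => children).ToList();
            }

            return items;
        }

    If the number of outstanding calls per level needs a cap, the frontier can be split into batches (as in the UPDATE snippet) before each Task.WhenAll.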

    Read the article

  • Deterministic floating point and .NET

    - by code2code
    How can I guarantee that floating point calculations in a .NET application (say in C#) always produce the same bit-exact result, especially when using different versions of .NET and running on different platforms (x86 vs x86_64)? Inaccuracies of floating point operations do not matter. In Java I'd use strictfp. In C/C++ and other low-level languages this problem is essentially solved by accessing the FPU / SSE control registers, but that's probably not possible in .NET. Even with control of the FPU control register, the JIT of .NET will generate different code on different platforms. Something like HotSpot would be even worse in this case...

    Why do I need it? I'm thinking about writing a real-time strategy (RTS) game which heavily depends on fast floating point math together with a lock-stepped simulation. Essentially I will only transmit user input across the network. This also applies to other games which implement replays by storing the user input. Not options:

        - decimals (too slow)
        - fixed-point values (too slow and cumbersome when using sqrt, sin, cos, tan, atan...)
        - updating state across the network like an FPS: sending position information for hundreds or a few thousand units is not an option

    Any ideas?

    Read the article

  • Localisable Resources: how can (should one?!) wrap a UI layer source as a BL layer service?

    - by Ciel
    A service that returns localised strings could be used both locally (e.g. in an MVC app) and remotely (e.g. possibly Silverlight). But if one sticks with the standard practice of creating resources in the UI assembly, that would in effect make a lower layer (BL/Services) depend on a higher layer (UI)... a definite no-no.

    A lot of app-wide resources (e.g. AppName, OK, Cancel, etc.) could be defined in a common cross-cutting assembly, and the BL resource service could reference and wrap those, but that doesn't work in a modular app, where the core app should have no binding to, or knowledge of, any module. One solution could be to have each module, once loaded into memory, 'register' its resource files with the service, which would then serve them back (rather a long round trip, but at least consistent as a service, and potentially resources/images could be shared with other modules). Secondly, that may work in a web app... but I'm not sure how that pattern could be extended to a modular Silverlight app (the round-tripping becomes prohibitive).

    I.e., what are the best practices for allowing resources to be defined by the UI designer, in a higher layer, but served from the lower BL layer, as a service? Or is there a better way of understanding/solving the problem?
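    A minimal sketch of the registration idea, with entirely hypothetical type and member names: each module hands its own ResourceManager to a lower-layer service when it is loaded, so the BL layer never needs a compile-time reference to any UI or module assembly.

        // Sketch only: the service owns a map of module key -> ResourceManager.
        // Modules register themselves at startup; callers (local or remote facades)
        // ask for strings by module key, resource key and culture.
        public interface ILocalizedStringService
        {
            void RegisterResources(string moduleKey, System.Resources.ResourceManager resources);
            string GetString(string moduleKey, string resourceKey, System.Globalization.CultureInfo culture);
        }

        public class LocalizedStringService : ILocalizedStringService
        {
            private readonly System.Collections.Concurrent.ConcurrentDictionary<string, System.Resources.ResourceManager> _managers =
                new System.Collections.Concurrent.ConcurrentDictionary<string, System.Resources.ResourceManager>();

            public void RegisterResources(string moduleKey, System.Resources.ResourceManager resources)
            {
                _managers[moduleKey] = resources;
            }

            public string GetString(string moduleKey, string resourceKey, System.Globalization.CultureInfo culture)
            {
                System.Resources.ResourceManager rm;
                return _managers.TryGetValue(moduleKey, out rm) ? rm.GetString(resourceKey, culture) : null;
            }
        }

    A module might call RegisterResources("Ordering", Properties.Resources.ResourceManager) at initialisation (names hypothetical); for the Silverlight case the same interface could sit behind a service facade, at the round-trip cost noted above.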

    Read the article

  • Etiquette: Version bump my fork of opensource project?

    - by Ross
    This question is about etiquette and open source projects. I have forked an application from GitHub and added two new features. The first feature has been requested frequently elsewhere; I have added it, and the code and implementation are clean (I think). The second feature is more of a hack. It will be of use to others, but the implementation is a little dirty in usage and more so in code. I need the feature, but I don't have the skills to implement it properly or to a level that could be considered a worthwhile contribution to the main project.

    How should the versioning work? Do I just bump up my version numbers care-free and push to my master branch? It is annoying not knowing which version is running, modified or original, as both have the same version number. But will it be confusing when, months later, my GitHub page has a version number the same as the original while the two are actually completely different? (I have made pull requests etc., but that is not the context of my question.)

    The project I have forked uses Ruby jeweler, so it has a versioning format of:

        Jeweler tracks the version of your project. It assumes you will be using a version in the format x.y.z. x is the 'major' version, y is the 'minor' version, and z is the patch version.

    Is this standard for other projects/languages too? Are my changes patches? Thanks

    Read the article

  • Can a Snapshot transaction fail and only partially commit in a TransactionScope?

    - by Travis Brooks
    Greetings. I stumbled onto a problem today that seems sort of impossible to me, but it's happening... I'm calling some database code in C# that looks something like this:

        using (var tran = MyDataLayer.Transaction())
        {
            MyDataLayer.ExecSproc(new SprocTheFirst(arg1, arg2));
            MyDataLayer.CallSomethingThatEventuallyDoesLinqToSql(arg1, argEtc);
            tran.Commit();
        }

    I've simplified this a bit for posting, but what's going on is that MyDataLayer.Transaction() creates a TransactionScope with the IsolationLevel set to Snapshot and TransactionScopeOption set to Required. This code gets called hundreds of times a day and almost always works perfectly. However, after reviewing some data I discovered there are a handful of records created by "SprocTheFirst" but no corresponding data from "CallSomethingThatEventuallyDoesLinqToSql". The only way records should exist in the tables I'm looking at is via SprocTheFirst, and it is only ever called in this one function, so if it was called and succeeded then I would expect CallSomethingThatEventuallyDoesLinqToSql to be called and succeed as well, because it's all in the same TransactionScope. It's theoretically possible that some other dev mucked around in the DB, but I don't think they have. We also log all exceptions, and I can find nothing unusual happening around the time that the records from SprocTheFirst were created. So, is it possible that a transaction, or more properly a declarative TransactionScope, with Snapshot isolation level can fail somehow and only partially commit?
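    One possibility worth ruling out (purely a guess from the snippet) is that the LINQ to SQL call uses a connection that never enlists in the ambient transaction, for example one opened before the scope began or under a suppressed scope, in which case its work commits or fails on its own. A small sketch of the kind of assertion that can confirm both calls really share the same ambient transaction; the MyDataLayer names come from the question, the rest is hypothetical:

        // Sketch: capture the ambient transaction inside the scope and check it is
        // still the same one just before the LINQ to SQL call. A connection that
        // does not see this ambient transaction will not take part in the scope.
        using (var tran = MyDataLayer.Transaction())
        {
            var ambient = System.Transactions.Transaction.Current;
            System.Diagnostics.Debug.Assert(ambient != null, "No ambient transaction - scope not active here");

            MyDataLayer.ExecSproc(new SprocTheFirst(arg1, arg2));

            System.Diagnostics.Debug.Assert(
                System.Transactions.Transaction.Current == ambient,
                "Ambient transaction changed before the LINQ to SQL call");

            MyDataLayer.CallSomethingThatEventuallyDoesLinqToSql(arg1, argEtc);
            tran.Commit();
        }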

    Read the article

  • How string accepting interface should look like?

    - by ybungalobill
    Hello. This is a follow-up to this question. Suppose I write a C++ interface that accepts or returns a const string. I can use a const char* zero-terminated string:

        void f(const char* str);                   // (1)

    The other way would be to use a std::string:

        void f(const string& str);                 // (2)

    It's also possible to write an overload and accept both:

        void f(const char* str);                   // (3)
        void f(const string& str);

    Or even a template in conjunction with the Boost string algorithms:

        template<class Range> void f(const Range& str);   // (4)

    My thoughts are:

        (1) is not C++ish and may be less efficient when subsequent operations need to know the string length.
        (2) is bad because now f("long very long C string"); invokes construction of a std::string, which involves a heap allocation. If f uses that string just to pass it to some low-level interface that expects a C string (like fopen), then it is just a waste of resources.
        (3) causes code duplication, although one f can call the other depending on which is the most efficient implementation. However, we can't overload based on return type, as in the case of std::exception::what(), which returns a const char*.
        (4) doesn't work with separate compilation and may cause even larger code bloat.

    Choosing between (1) and (2) based on what's needed by the implementation is, well, leaking an implementation detail into the interface. The question is: what is the preferred way? Is there any single guideline I can follow? What's your experience?

    Read the article

  • How do I "rebind" the click event after unbind('click') ?

    - by Ben
    I have an anchor tag <a class="next">next</a> made into a "button". Sometimes this tag needs to be hidden if there is nothing new to show. All works fine if I simply hide the button with .hide() and re-display it with .show(), but I wanted to use .fadeIn() and .fadeOut() instead. The problem I'm having is that if the user clicks on the button during the fadeOut animation, it can cause problems with the logic I have running the show. The solution I found was to unbind the click event from the button after the original click function begins, and then re-bind it after the animation is complete.

        $('a.next').click(function() {
            $(this).unbind('click');
            ...
            // calls some functions, one of which fades out the a.next if needed
            ...
            $(this).bind('click');
        });

    The last part of the above example does not work; the click event is not actually re-bound to the anchor. Does anyone know the correct way to accomplish this? I'm a self-taught jQuery guy, so some of the higher-level things like unbind() and bind() are over my head, and the jQuery documentation isn't really simple enough for me to understand.

    Read the article

  • How do I compare two PropertyInfos or methods reliably?

    - by Rob Ashton
    Same for methods too: I am given two instances of PropertyInfo or MethodInfo which have been extracted from the class they sit on via GetProperty or GetMember etc. (or from a MemberExpression, maybe). I want to determine whether they are in fact referring to the same property or the same method, so (propertyOne == propertyTwo) or (methodOne == methodTwo). Clearly that isn't going to actually work; you might be looking at the same property, but it might have been extracted from different levels of the class hierarchy (in which case, generally, propertyOne != propertyTwo).

    Of course, I could look at DeclaringType and re-request the property, but this starts getting a bit confusing when you start thinking about:

        - Properties/methods declared on interfaces and implemented on classes
        - Properties/methods declared on a base class (virtually) and overridden on derived classes
        - Properties/methods declared on a base class, overridden with 'new' (in IL terms this is nothing special, iirc)

    At the end of the day, I just want to be able to do an intelligent equality check between two properties or two methods. I'm 80% sure that the above bullet points don't cover all of the edge cases, and while I could just sit down, write a bunch of tests and start playing about, I'm well aware that my low-level knowledge of how these concepts are actually implemented is not excellent, and I'm hoping this is an already answered topic and I just suck at searching. The best answer would give me a couple of methods that achieve the above, explaining what edge cases have been taken care of and why :-)
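    One sketch of the kind of helpers being asked for (it assumes using System.Reflection and deliberately ignores the interface-map and 'new'-slot cases called out above): two MemberInfo instances refer to the same metadata definition when their module and metadata token match, regardless of which type they were obtained through, and walking virtual methods back to their base definition first lets an override compare equal to the method it overrides.

        // Same metadata definition, independent of the type the member was fetched from.
        static bool SameMember(MemberInfo a, MemberInfo b)
        {
            return a.Module.Equals(b.Module) && a.MetadataToken == b.MetadataToken;
        }

        // Treat an override and the virtual method it overrides as the same logical member.
        static bool SameVirtualSlot(MethodInfo a, MethodInfo b)
        {
            return SameMember(a.GetBaseDefinition(), b.GetBaseDefinition());
        }

        // PropertyInfo has no GetBaseDefinition; comparing via an accessor is one workaround.
        static bool SameProperty(PropertyInfo a, PropertyInfo b)
        {
            MethodInfo ga = a.GetGetMethod(true), gb = b.GetGetMethod(true);
            return (ga != null && gb != null) ? SameVirtualSlot(ga, gb) : SameMember(a, b);
        }

    Members obtained through an interface map, or hidden with 'new', still need their own handling; these helpers only cover the straightforward inheritance case.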

    Read the article
