Search Results

Search found 35892 results on 1436 pages for 'a different ben'.

Page 412/1436

  • What do you need to know to be a world-class master software developer? [closed]

    - by glitch
    I wanted to bring this question to you folks and see what you think, and hopefully get your advice on the matter: say you had 30 years of learning and practicing software development ahead of you. How would you dedicate your time so that you'd get the biggest bang for your buck? What would you learn and work on to become a world-class software developer who makes a large impact on the industry and leaves behind a legacy? I think most great developers end up being both broad generalists and specialists in one or two areas of interest. I'm thinking of Bill Joy, John Carmack, Linus Torvalds, K&R and so on. Perhaps one approach would be to break things down by category and establish a base minimum of "software development" greatness. I'm thinking:

    Operating systems: completely internalize the core concepts of an OS, perhaps gaining deep familiarity with an open-source one such as Linux. Everything from memory management to device drivers has to be completely second nature.

    Programming languages: this is one of those topics that, IMHO, has to be fully grokked even if it takes many years. I don't think there's anything quite like going through the process of developing your own compiler, understanding language design trade-offs and so on. Programming Language Pragmatics is actually one of my favorite books; I think you want to have it internalized back to back, and that's just the start. You could go significantly deeper, but I think it's time well spent, because it's such a crucial building block. As a subset of that, you want to really understand the different programming paradigms out there: imperative, declarative, logic, functional and so on. Anything from assembly to Lisp should be at the very least comfortable to write in.

    Contexts: I believe one should have experience working in different contexts to truly appreciate the trade-offs that are being made every day: embedded, web development, mobile development, UX development, distributed, cloud computing and so on.

    Hardware: I'm somewhat conflicted about this one. I think you want some understanding of computer architecture at a low level, but I feel the concepts that will truly matter are slightly higher level, such as CPU caching, the memory hierarchy, ILP, and so on.

    Networking: we live in a completely network-dependent era. Having a good understanding of the OSI model, knowing how the web works, how HTTP works and so on is pretty much a prerequisite these days.

    Distributed systems: once again, everything's distributed these days, and it's getting progressively harder to ignore this reality. Slightly related: perhaps add a solid understanding of how browsers work, since the world seems to be moving so much toward interfacing with everything through a browser.

    Tools: have a really broad toolset that you're familiar with, one that continuously expands through the years.

    Communication: I think being a great writer, an effective communicator and a phenomenal team player is pretty much a prerequisite for a lot of a software developer's greatness. It can't be overstated.

    Software engineering: understanding the process of building software, team dynamics, the requirements of the business side, and all the pitfalls. You want to deeply understand where what you're writing fits from the market perspective. The better you understand all of this, the more of your work will actually see the light of day.

    This is really just a starting list; I'm confident there's a ton of other material you need to master. As I mentioned, you'll most likely end up specializing in a handful of these areas as you go along, but I was trying to come up with a baseline. Any thoughts, suggestions and words of wisdom from the grizzled veterans out there who would like to share their thoughts and experiences? I'd really love to know what you think!

    Read the article

  • Entity System with C++ templates

    - by tommaisey
    I've been getting interested in the entity/component style of game programming, and I've come up with a design in C++ which I'd like a critique of. I decided to go with a fairly pure entity system, where entities are simply ID numbers. Components are stored in a series of vectors - one for each component type. However, I didn't want to have to add boilerplate code for every new component type I added to the game, nor did I want to use macros to do it, which frankly scare me. So I've come up with a system based on templates and type hinting. But there are some potential issues I'd like to check before I spend ages writing this (I'm a slow coder!).

    All components derive from a Component base class. This base class has a protected constructor that takes a string parameter. When you write a new derived component class, you must initialise the base with the name of your new class as a string. The first time you instantiate a new DerivedComponent, it adds the string to a static hashmap inside Component, mapped to a unique integer id. When you subsequently instantiate more components of the same type, no action is taken. The result (I think) should be a static hashmap holding the name of each class derived from Component that you instantiate at least once, mapped to a unique id, which can be obtained with the static method Component::getTypeId("DerivedComponent"). Phew.

    The next important part is TypedComponentList<typename PropertyType>. This is basically just a wrapper around an std::vector<typename PropertyType> with some useful methods. It also contains a hashmap from entity ID numbers to slots in the array, so we can find components by their entity owner. Crucially, TypedComponentList<> is derived from the non-template class ComponentList. This allows me to maintain a list of pointers to ComponentList in my main ComponentManager, which actually point to TypedComponentLists with different template parameters (sneaky). The ComponentManager has template functions such as:

        template <typename ComponentType>
        void addProperty (ComponentType& component, int componentTypeId, int entityId)

    and:

        template <typename ComponentType>
        TypedComponentList<ComponentType>* getComponentList (int componentTypeId)

    which deal with casting from ComponentList to the correct TypedComponentList for you. So to get a list of a particular type of component you call:

        TypedComponentList<MyComponent>* list =
            componentManager.getComponentList<MyComponent> (Component::getTypeId("MyComponent"));

    which I'll admit looks pretty ugly.

    Bad points of the design: if a user of the code writes a new component class but supplies the wrong string to the base constructor, the whole system will fail. Each time a new component is instantiated, we must check a hashed string to see if that component type has been instantiated before. The extensive use of templates will probably generate a lot of assembly, and I don't know how well the compiler will be able to minimise this. You could consider the whole system a bit complex - perhaps premature optimisation? But I want to use this code again and again, so I want it to be performant.

    Good points of the design: components are stored in typed vectors, but they can also be found by using their entity owner's id as a hash. This means we can iterate them fast and minimise cache misses, but also skip straight to the component we need if necessary. We can freely add components of different types to the system without having to add and manage new component vectors by hand.

    What do you think? Do the good points outweigh the bad?
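    To make the registration mechanics concrete, here is a minimal, hypothetical C++ sketch of the string-keyed type-id registry described above - the class names and the register-in-constructor choice are illustrative assumptions, not the poster's actual code:

        #include <string>
        #include <unordered_map>

        class Component {
        public:
            virtual ~Component() = default;

            // Returns a stable integer id for a component type name,
            // registering the name on first use.
            static int getTypeId(const std::string& typeName) {
                static std::unordered_map<std::string, int> registry;
                auto it = registry.find(typeName);
                if (it != registry.end()) return it->second;
                int id = static_cast<int>(registry.size());
                registry.emplace(typeName, id);
                return id;
            }

        protected:
            explicit Component(const std::string& typeName) {
                getTypeId(typeName); // first instantiation registers the type
            }
        };

        class PositionComponent : public Component {
        public:
            // A misspelled string here silently creates a second type id -
            // the fragility called out under "bad points".
            PositionComponent() : Component("PositionComponent") {}
            float x = 0, y = 0;
        };

    One way around the misspelling hazard, at the cost of a little template machinery, is to derive the id from the type itself (e.g. a static counter incremented once per template instantiation), so no strings are involved at all.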

    Read the article

  • Asus X202e VivoBook, dual boot. How to get around UEFI and have Win8 & Ubuntu?

    - by Nukeface
    I've gotten my hands on an Asus VivoBook X202e (the i3 core version). I like it: handy to use, small, etc. For school I still need Windows (* sigh *) for .NET development. (I know it's possible in Ubuntu, this 'n' that, but for ease I'm keeping it with Win8 for now.) So: how do I install both on this little thing? I've found a way into the BIOS (mash F2 before the splash screen; it works only after a reboot, not a cold boot). But the whole boot-loading setup is different from what I know, and I must've messed something up, because it's been "Attempting Repairs", "Analyzing hard disk", and a bunch of other things for the past 15 minutes. (All I've done is select "disabled" on Secure Boot - picky as ** Microsoft.) Keeping the original Windows installation is of no concern; I found the product key already and have a clean install waiting. BTW, I'm not trying to leech knowledge, even though this is my first question and I have no answers yet - I'm more and more active on Stack Overflow. But, especially due to Secure Boot and Windows 8, I'm going over to Ubuntu. Well, more and more anyway; I like my Windows-based games as well ;)

    UPDATE: Managed to do a clean install of Windows 8 Pro. After disabling Secure Boot, I also had to disable Fast Boot and enable Launch CSM, leaving the option which appeared (Launch PXE OpROM) disabled. Then I rebooted with the USB boot drive I created using the Windows 7 USB/DVD Download Tool provided by Microsoft. During the installation I chose a clean install, and therefore deleted the partitions containing the current Windows files. I left the recovery partition (you never know...). Of course, the new Windows installation did not like this: apparently Windows cannot be installed on a GPT hard disk. Remember, I hadn't changed the partition table - it was still factory default, minus a few partitions, granted. So I deleted ALL partitions, did a format of the disk, and created a new partition. Et voila, the Windows installation started. FINALLY!

    WONDROUS: After the installation, Windows still had background images located in C:/Users/ ME /AppData/Local/Microsoft/Themes/RoamedThemeFiles/DesktopBackground/ that I had in the previous installation - before doing: format, delete partition, cascade partitions, create new partition of a different size, format partition, install Windows. It managed to keep the images through all that. Anyone got an idea on that one? It also remembered the settings for the Windows Aero theme...

    UPDATED QUESTION: After all this you'd think I'd have the rest figured out. Wrong. The Ubuntu 12.10 64-bit installer can't read the partitioning of the HDD during the installation. Any ideas on how to fix this so the install for a dual-boot system can proceed? (Preferably without starting anew with Windows as well ;) )

    Read the article

  • The long road to bug-free software

    - by Tony Davis
    The past decade has seen a burgeoning interest in functional programming languages such as Haskell or, in the Microsoft world, F#. Though still on the periphery of mainstream programming, functional programming concepts are gradually seeping into the imperative C# language (lambda expressions, for example, have their roots in functional programming).

    One of the more interesting concepts from functional programming languages is the use of formal methods, the lofty ideal behind which is bug-free software. The idea is that we write a specification that describes exactly how our function (say) should behave. We then prove that our function conforms to it, and in doing so have proved beyond any doubt that it is free from bugs. All programmers already use one form of specification, namely their programming language's type system. If a value has a specific type then, in a type-safe language, the compiler guarantees that the value cannot be an instance of a different type. Many extensions to existing type systems, such as generics in Java and .NET, extend the range of programs that can be type-checked.

    Unfortunately, type systems can only prevent some bugs. Take the classic problem of retrieving a value from an array by index: since the type system doesn't specify the length of the array, the compiler has no way of knowing that a request for the "value at index 4" from an array of only two elements is unsafe. We restore safety via exception handling, but the ideal type system would prevent us from doing anything unsafe in the first place, and this is where we start to borrow ideas from a language such as Haskell, with its concept of "dependent types". If the type of an array includes its length, we can ensure that any index accesses into the array are valid. The problem is that we now need to carry the lengths of arrays and the values of indices throughout our code so that it can be type-checked.

    In general, writing a specification to prove a positive property, even for a problem very amenable to specification, such as a simple sorting algorithm, turns out to be very hard, and the specification will be different for every program. Extend this to writing a specification for, say, Microsoft Word and we can see that the specification would end up being no simpler, and therefore no less buggy, than the implementation. Fortunately, it is easier to write a specification that proves a program doesn't have certain specific, undesirable properties, such as infinite loops or accesses to the wrong bit of memory. If we can write the specifications to prove that a program is immune to such problems, we can reuse them in many places. The problem is the lack of specification "provers" that can do this without a lot of manual intervention (i.e. hints from the programmer).

    All this might feel a very long way off, but computing power and our understanding of the theory of provers advance quickly, and Microsoft is doing some of it already. Via their Terminator research project they have started to prove that their device drivers will always terminate, and in so doing have suddenly eliminated a vast range of possible bugs. This is a huge step forward from saying, "we've tested it lots and it seems fine."

    What do you think? What might be good targets for specification and verification? SQL could be one: the cost of a bug in SQL Server is quite high given how many important systems rely on it, so there's a good incentive to eliminate bugs, even at high initial cost.
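    As an aside, the dependent-types point can be sketched even in today's GHC Haskell. The following is illustrative only (a standard length-indexed vector, not taken from any particular verification tool): because an index of type Fin n can never exceed the length n carried in the vector's type, the out-of-bounds case simply cannot be written.

        {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

        data Nat = Z | S Nat

        -- A vector whose length is part of its type.
        data Vec (n :: Nat) a where
          Nil  :: Vec 'Z a
          Cons :: a -> Vec n a -> Vec ('S n) a

        -- An index that is provably less than n.
        data Fin (n :: Nat) where
          FZ :: Fin ('S n)
          FS :: Fin n -> Fin ('S n)

        -- Total: there is no case for indexing into Nil,
        -- because Fin 'Z has no values.
        index :: Vec n a -> Fin n -> a
        index (Cons x _)  FZ     = x
        index (Cons _ xs) (FS i) = index xs i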
[Many thanks to Mike Williamson for guidance and useful conversations during the writing of this piece] Cheers, Tony.

    Read the article

  • Webcast On-Demand: Building Java EE Apps That Scale

    - by jeckels
    With some awesome work by one of our architects, Randy Stafford, we recently completed a webcast on scaling Java EE apps efficiently. Did you miss it? No problem - we have a replay available on demand for you. Just hit the '+' sign drop-down for access. Topics include:

        Domain object caching
        Service response caching
        Session state caching
        JSR-107
        HotCache
        and more!

    Further, our audience asked several interesting questions, and we thought we'd share a sampling of those here for you - just in case you had the same queries yourself. Enjoy!

    What is the largest Coherence deployment out there? We have seen deployments with over 500 JVMs in the Coherence cluster, and deployments with over 1000 JVMs using the Coherence jar file, in one system. On the management side there is an ecosystem of monitoring tools from Oracle and third parties, with dashboards graphing values from Coherence's JMX instrumentation. For lifecycle management we have seen a lot of custom scripting over the years, but we've also integrated closely with WebLogic to leverage its management ecosystem for deploying Coherence-based applications and managing process life cycles. That integration introduces a new Java EE archive type, the Grid Archive or GAR, which embeds in an EAR and can be seen by a WAR in WebLogic. The integration also doesn't require any extra WebLogic licensing if Coherence is licensed.

    How is Coherence different from a NoSQL database like MongoDB? Coherence can be considered a NoSQL technology. It pre-dates the NoSQL movement, having been first released in 2001, whereas the term "NoSQL" was coined in 2009. Coherence has primarily a key-value data model but can also be used for document data models. Coherence manages data in memory currently, though disk persistence is in a future release currently in beta testing. Where the data is managed yields a few differences from the best-known NoSQL products: access latency is lower with Coherence, though well-known NoSQL databases can manage more data. Coherence also has features that well-known NoSQL databases lack, such as grid computing, eventing, and data source integration. Finally, Coherence has had 15 years of maturation and hardening from usage in mission-critical systems across a variety of industries, particularly financial services.

    Can I use Coherence for local caching? Yes, and you get additional features beyond just a java.util.Map: expiration capabilities, size-limitation capabilities, eventing capabilities, etc.

    Are there APIs available for GoldenGate HotCache? It's mostly a black box: you configure it, and it just puts objects into your caches. However, you can treat it as a glass box and use Coherence event interceptors to enhance its behavior - and there are use cases for that.

    Are Coherence caches updated transactionally? Coherence provides several mechanisms for concurrency control. If a project insists on full-blown JTA/XA distributed transactions, Coherence caches can participate as resources, but nobody does that because it's a performance and scalability anti-pattern. At finer granularity, Coherence guarantees strict ordering of all operations (reads and writes) against a single cache key if the operations are done using Coherence's "EntryProcessor" feature. And Coherence has a unique feature called "partition-level transactions" which guarantees atomic writes of multiple cache entries (even in different caches) without requiring JTA/XA distributed transaction semantics.
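    As a hedged illustration of the local-caching answer above (the cache name and TTL value are made up; NamedCache/CacheFactory is the classic Coherence entry point):

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;

        public class LocalCacheDemo {
            public static void main(String[] args) {
                // Looks and behaves like a java.util.Map...
                NamedCache cache = CacheFactory.getCache("demo-cache");
                cache.put("greeting", "hello");
                System.out.println(cache.get("greeting"));

                // ...with extras a plain Map lacks, e.g. a per-entry TTL
                // via the expiry overload of put (milliseconds):
                cache.put("session-42", "state", 60000L);

                CacheFactory.shutdown();
            }
        }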

    Read the article

  • Since upgrading to Solaris 11, my ARC size has consistently targeted 119MB, despite having 30GB RAM. What? Why?

    - by growse
    I ran a NAS/SAN box on Solaris 11 Express before Solaris 11 was released. The box is an HP X1600 with an attached D2700: in all, 12x 1TB 7200 SATA disks and 12x 300GB 10k SAS disks in separate zpools. Total RAM is 30GB. Services provided are CIFS, NFS and iSCSI. All was well, and my ZFS memory usage graph showed a fairly healthy ARC size of around 23GB - making use of the available memory for caching. However, I then upgraded to Solaris 11 when that came out. Now the graph has collapsed, and partial output of arc_summary.pl is:

        System Memory:
            Physical RAM: 30701 MB
            Free Memory : 26719 MB
            LotsFree:     479 MB

        ZFS Tunables (/etc/system):

        ARC Size:
            Current Size:            915 MB (arcsize)
            Target Size (Adaptive):  119 MB (c)
            Min Size (Hard Limit):   64 MB (zfs_arc_min)
            Max Size (Hard Limit):   29677 MB (zfs_arc_max)

    It's targeting 119MB while sitting at 915MB - and it's got 30GB to play with. Why? Did they change something?

    Edit: To clarify, arc_summary.pl is Ben Rockwood's, and the relevant lines generating the above stats are:

        my $mru_size     = ${Kstat}->{zfs}->{0}->{arcstats}->{p};
        my $target_size  = ${Kstat}->{zfs}->{0}->{arcstats}->{c};
        my $arc_min_size = ${Kstat}->{zfs}->{0}->{arcstats}->{c_min};
        my $arc_max_size = ${Kstat}->{zfs}->{0}->{arcstats}->{c_max};
        my $arc_size     = ${Kstat}->{zfs}->{0}->{arcstats}->{size};

    The Kstat entries are there; I'm just getting odd values out of them.

    Edit 2: I've just re-measured the ARC size with arc_summary.pl, and verified these numbers with kstat:

        System Memory:
            Physical RAM: 30701 MB
            Free Memory : 26697 MB
            LotsFree:     479 MB

        ZFS Tunables (/etc/system):

        ARC Size:
            Current Size:            744 MB (arcsize)
            Target Size (Adaptive):  119 MB (c)
            Min Size (Hard Limit):   64 MB (zfs_arc_min)
            Max Size (Hard Limit):   29677 MB (zfs_arc_max)

    The thing that strikes me is that the target size is 119MB. Looking at the graph, it has targeted the exact same value (124.91M according to Cacti, 119M according to arc_summary.pl - I think the difference is just 1024 vs. 1000 rounding) ever since Solaris 11 was installed. It looks like the kernel is making zero effort to shift the target size to anything different. The current size fluctuates as the (large) needs of the system fight with the target size, and equilibrium appears to lie between 700 and 1000MB.

    So the question is now a little more pointed: why is Solaris 11 hard-setting my ARC target size to 119MB, and how do I change it? Should I raise the min size to see what happens? I've stuck the output of kstat -n arcstats over at http://pastebin.com/WHPimhfg

    Edit 3: OK, weirdness now. I know flibflob mentioned that there was a patch to fix this. I haven't applied this patch yet (still sorting out internal support issues) and I've not applied any other software updates. Last Thursday, the box crashed - as in, completely stopped responding to everything. When I rebooted it, it came back up fine, and my graph is back to the healthy pattern; the crash seems to have fixed the problem. This is proper la-la-land stuff now. I've literally no idea what's going on. :(
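    For anyone experimenting with the "raise the min size" idea above: on Solaris the ARC bounds can be pinned in /etc/system (the values below are illustrative, not recommendations, and a reboot is required for them to take effect):

        * Hypothetical ARC bounds (4 GB floor, 24 GB ceiling) -
        * adjust to your own RAM budget.
        set zfs:zfs_arc_min = 0x100000000
        set zfs:zfs_arc_max = 0x600000000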

    Read the article

  • Excel 2007 - "The macro may not be available in this workbook" Error

    - by Psycho Bob
    We use an Excel sheet that has been protected to prevent end users from modifying it. All in all, they are only able to edit certain tabs to add information, which is then used to generate information on other tabs using equations and such. On the tab with the equations, there is a button called "Prep for Internal Hard Copy Print." This button runs a macro that selects the information on the tab, unprotects it, then sends a print job to the user's default printer containing the unprotected content. Normally this works like a champ. This time around, however, the macro is throwing the following error:

        Cannot run the macro "FILENAME.xlsx'!MacroName'. The macro may not be available in this workbook or all macros may be disabled.

    As far as I can tell, the macros are still present within the workbook. This sheet is normally a .xlsm, though the user saved it under a different filename as a .xlsx. Also, the macros appear only as MacroName in the .xlsm file, and not as "FILENAME.xlsx'!MacroName'" as they do in the .xlsx. Finally, when I open the .xlsm it asks if I want to enable the macro content, while the .xlsx does not prompt for this. Can anyone tell me what's going on with this sheet, or know of a way to get the macros working in the .xlsx without having to start over with a different sheet?

    Read the article

  • HintPath vs ReferencePath in Visual Studio

    - by toasteroven
    What exactly is the difference between the HintPath in a .csproj file and the ReferencePath in a .csproj.user file? We're trying to commit to a convention where dependency DLLs live in a "releases" SVN repo and all projects point to a particular release. Since different developers have different folder structures, relative references won't work, so we came up with a scheme that uses an environment variable pointing to the particular developer's releases folder to create an absolute reference. After a reference is added, we manually edit the project file to change the reference to an absolute path using the environment variable.

    I've noticed that this can be done with both the HintPath and the ReferencePath, but the only difference I could find between them is that HintPath is resolved at build time and ReferencePath when the project is loaded into the IDE. I'm not really sure what the ramifications of that are, though. I have noticed that VS sometimes rewrites the .csproj.user file and I have to redo the ReferencePath, but I'm not sure what triggers that. I've heard it's best not to check in the .csproj.user file since it's user-specific, so I'd like to aim for that, but I've also heard that the HintPath-specified DLL isn't "guaranteed" to be loaded if the same DLL is, e.g., located in the project's output directory. Any thoughts on this?
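    For what it's worth, the environment-variable scheme described above typically ends up looking something like this in the .csproj (the variable and library names here are hypothetical; MSBuild expands environment variables with the $(Name) syntax):

        <Reference Include="SomeVendorLib">
          <HintPath>$(ReleasesRoot)\SomeVendorLib\1.2.0\SomeVendorLib.dll</HintPath>
        </Reference>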

    Read the article

  • How to parse an Amadeus AIR ticket file

    - by Andrus
    Amadeus produces an AIR file like the one below for every flight reservation. I need to read the reservation number and the source and destination airports from this file. I searched Google for "amadeus air format" but haven't found a format description. The Wikipedia entry about EDIFACT is a bit different; it does not describe this content. Where can I find information about the file structure? How do I parse this file? I have no idea about the file structure: does it contain records like a SQL table, or is it a set of reservation-protocol instructions, like a PostScript file? The application should work in Microsoft Windows, preferably in Visual FoxPro or C#; FoxPro or Microsoft Visual Studio 2012 Express can be used as the programming environment.

    Google returns only Amadeus user guides and tutorials, like the one in the comment and those at http://www.amadeusschweiz.com/en/documentation/usermanuals.html - those are user manuals. The most promising looks to be the Amadeus Air user guide from that page. The file I received was named air.txt, and the first token in the file is AIR-BLK206. Maybe BLK206 is some booking format descriptor; Google returns some documents like mine that use it, so it looks like it is commonly used. The file probably describes the process that reserves the ticket, which produced the air.txt file. I searched this guide and the ticket user guide for "BLK", but they do not contain this abbreviation, and the commands in the user manuals look different from those in this file. How do I use this information to extract the reservation number and destination airport from the file? I haven't found a format description using Google. There are Amadeus user guides, tutorials and quick-reference files similar to the ones you posted, but I don't understand how to use them to parse this file. One message describes this as a form of EDIFACT, but the EDIFACT message sample in Wikipedia is also different. I need to create a quick prototype for a customer which shows that we can read these files. Maybe there are programs which can be used to display it in human-readable form?

    Read the article

  • ASP.Net: IHttpAsyncHandler and AsyncProcessorDelegate

    - by ctrlShiftBryan
    I have implemented an IHttpAsyncHandler. I am making about 5 different AJAX calls to that handler from a webpage that has widgets. One of those widgets takes about 15 seconds to load (because of a large database query); the others should all load in under a second. The handler is responding in a synchronous manner, and I am getting very inconsistent results. The ProcessRequest method is using Session and other class-level variables. Could that be causing different requests to use the same thread instead of each getting their own? I'm getting this:

        Request1 --- response 1 sec
        Request2 --- response 14 sec
        Request3 --- response 14.5 sec
        Request4 --- response 15 sec
        Request5 --- response 15.5 sec

    but I'm looking for something more like this:

        Request1 --- response 1 sec
        Request2 --- response 14 sec
        Request3 --- response 1.5 sec
        Request4 --- response 2 sec
        Request5 --- response 1.5 sec

    Without posting too much code, my implementation of the IHttpAsyncHandler methods is pretty standard:

        private AsyncProcessorDelegate _Delegate;

        protected delegate void AsyncProcessorDelegate(HttpContext context);

        IAsyncResult IHttpAsyncHandler.BeginProcessRequest(HttpContext context, AsyncCallback cb, object extraData)
        {
            _Delegate = new AsyncProcessorDelegate(ProcessRequest);
            return _Delegate.BeginInvoke(context, cb, extraData);
        }

        void IHttpAsyncHandler.EndProcessRequest(IAsyncResult result)
        {
            _Delegate.EndInvoke(result);
        }

    Putting a debug breakpoint in my IHttpAsyncHandler.BeginProcessRequest method, I can see that the method isn't being fired until the last process is complete. Also, my machine.config has this entry, with no other properties set:

        processModel autoConfig="true"

    What else do I need to check for?
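    Two things in the snippet above are worth a hedged sketch. First, the class-level _Delegate field is shared by overlapping requests if the handler instance is reused, so a second call can overwrite the delegate the first call still needs; keeping it per-call avoids that. Second (an assumption about the root cause, not a confirmed diagnosis): ASP.NET serializes concurrent requests from the same session whenever a handler has read-write session access, which would produce exactly the staircase timings shown.

        // Per-call delegate: stash it in AsyncState instead of a field.
        IAsyncResult IHttpAsyncHandler.BeginProcessRequest(HttpContext context, AsyncCallback cb, object extraData)
        {
            var worker = new AsyncProcessorDelegate(ProcessRequest);
            // Note: this replaces the caller-supplied extraData with our own state.
            return worker.BeginInvoke(context, cb, worker);
        }

        void IHttpAsyncHandler.EndProcessRequest(IAsyncResult result)
        {
            var worker = (AsyncProcessorDelegate)result.AsyncState;
            worker.EndInvoke(result);
        }

    If the session data is only read, marking the handler with IReadOnlySessionState instead of IRequiresSessionState lifts the per-session lock and lets the five calls overlap.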

    Read the article

  • winforms databinding best practices

    - by Kaiser Soze
    Demands / problems:

    I would like to bind multiple properties of an entity to controls in a form, some of which are read-only from time to time (according to business logic). When using an entity that implements INotifyPropertyChanged as the DataSource, every change notification refreshes all the controls bound to that data source. (This is easy to verify: bind two properties to two controls and raise a change notification for one of them; you will see that both properties are hit and re-evaluated.) There should also be user-friendly error notifications (the entity implements IDataErrorInfo), probably using an ErrorProvider.

    Using the entity as the DataSource of the controls leads to performance issues and makes life harder when it's time for a control to be read-only. I thought of creating some kind of wrapper that holds the entity and a specific property, so that each control would be bound to a different DataSource. Moreover, that wrapper could hold the read-only indicator for that property, so the control could be bound directly to that value. The wrapper could look like this:

        interface IPropertyWrapper : INotifyPropertyChanged, IDataErrorInfo
        {
            object Value { get; set; }
            bool IsReadOnly { get; }
        }

    But this also means a different ErrorProvider for each property (property wrapper). I feel like I'm trying to reinvent the wheel... What is the 'proper' way of handling complex binding demands like these? Thanks ahead.
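    For what it's worth, here is a minimal sketch of one possible IPropertyWrapper implementation. The use of PropertyDescriptor and the forwarding logic are assumptions about how the wrapper might be wired, not a known-good pattern from the poster's codebase:

        using System;
        using System.ComponentModel;

        public class PropertyWrapper : INotifyPropertyChanged, IDataErrorInfo
        {
            private readonly object _entity;
            private readonly PropertyDescriptor _prop;

            public PropertyWrapper(object entity, string propertyName)
            {
                _entity = entity;
                _prop = TypeDescriptor.GetProperties(entity)[propertyName];

                if (entity is INotifyPropertyChanged npc)
                {
                    npc.PropertyChanged += (s, e) =>
                    {
                        // Forward notifications only for this property, so
                        // controls bound to other wrappers are not refreshed.
                        if (e.PropertyName == _prop.Name)
                            PropertyChanged?.Invoke(this,
                                new PropertyChangedEventArgs(nameof(Value)));
                    };
                }
            }

            public object Value
            {
                get { return _prop.GetValue(_entity); }
                set { _prop.SetValue(_entity, value); }
            }

            // Could also consult business rules here instead.
            public bool IsReadOnly
            {
                get { return _prop.IsReadOnly; }
            }

            public event PropertyChangedEventHandler PropertyChanged;

            // IDataErrorInfo: delegate to the wrapped entity's errors.
            public string this[string columnName]
            {
                get { return (_entity as IDataErrorInfo)?[_prop.Name]; }
            }

            public string Error
            {
                get { return (_entity as IDataErrorInfo)?.Error; }
            }
        }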

    Read the article

  • Random DNS Client Issue with BIND9/Windows Server 2003 DNS

    - by upkels
    Within our office, we have a local server running DNS for internal-only "domains" (e.g. .internal, .office, .lan, .vpn, etc.). Randomly, hosts configured under those extensions will stop resolving on the Windows-based workstations - and only those hosts. Sometimes a machine will work for a couple of weeks without issue and then suddenly stop; on another it will happen 15 times per day. It's completely random across all workstations. When troubleshooting, I have opened a command prompt and issued various nslookup commands for some of these hosts, and they resolve; however, I've been told that nslookup uses different "libraries" for name resolution than other applications such as web browsers, email clients, etc. The only solution thus far is manually restarting the Windows DNS Client on each workstation when this happens. Issuing the ipconfig /flushdns command multiple times helps every now and then, but is not successful often enough to even be worth attempting before restarting the DNS Client. I have tried two different DNS servers - BIND9 and Windows Server 2003 R2 DNS - and the behavior is the same. We have a single Netgear JGS524 switch that all workstations and servers are connected to within the office, and a Linksys SR224G switch in another department with workstations attached.

    Read the article

  • XML parsing using NSXMLParser on iPhone

    - by filthynight
    Hi all, I have a problem parsing XML from a Google API. The API XML looks like this: ....... Now the problem is that, first of all, the tags are different - the first parent tag name is "abc" and the second parent tag is "efg" - and furthermore the inner tags differ as well. I have modified code I got from the web that parses another URL, but in this case, since the tags keep changing, it does not work. The URL is "http://www.google.com/ig/api?weather=anaheim,ca". Source code:

        - (void)parser:(NSXMLParser *)parser didStartElement:(NSString *)elementName
           namespaceURI:(NSString *)namespaceURI
          qualifiedName:(NSString *)qName
             attributes:(NSDictionary *)attributeDict
        {
            //NSLog(@"found this element: %@", elementName);
            if (currentElement) {
                [currentElement release];
                currentElement = nil;
            }
            currentElement = [elementName copy];

            NSString *thisOwner = [attributeDict objectForKey:@"data"];
            NSLog(@"element Name-------: %@", thisOwner);

            if ([elementName isEqualToString:@"forecast_information"])
                //|| [elementName isEqualToString:@"current_conditions"] || [elementName isEqualToString:@"forecast_conditions"])
            {
                // clear out our story item caches...
                //NSString *nm = [[attributeDict objectForKey:@"city"] stringValue];
                //NSLog(@"value...%@",nm);
                item = [[NSMutableDictionary alloc] init];
                currentTitle = [[NSMutableString alloc] init];
                currentDate = [[NSMutableString alloc] init];
                currentSummary = [[NSMutableString alloc] init];
                currentLink = [[NSMutableString alloc] init];
                [item setObject:currentLink forKey:@"postal_code"];
                [item setObject:currentSummary forKey:@"current_date_time"];
                [item setObject:currentDate forKey:@"unit_system"];
                NSLog(@"adding story: %@", currentTitle);
                [stories addObject:[item copy]];
            }
        }

    I have another function, didEndElement, where the values are assigned to the array, but I could not figure out how to read the values of an attribute. Can somebody please help me in this regard? Thanks!

    Read the article

  • Using M2Crypto to save and load X509 certs in pem files

    - by Brock Pytlik
    I would expect that if I have an X509 cert as an object in memory, save it as a PEM file, then load it back in, I would end up with the same cert I started with. This seems not to be the case, however. Let's call the original cert A, and the cert loaded from the PEM file B. A.as_text() is identical to B.as_text(), but A.as_pem() differs from B.as_pem(). To say the least, I'm confused by this. As a side note, if A has been signed by another entity C, then A will verify against C's cert, but B will not. I've put together a tiny sample program to demonstrate what I'm seeing. When I run this, the second RuntimeError is raised. Thanks, Brock

        #!/usr/bin/python2.6
        import M2Crypto as m2
        import time

        cur_time = m2.ASN1.ASN1_UTCTIME()
        cur_time.set_time(int(time.time()) - 60*60*24)

        expire_time = m2.ASN1.ASN1_UTCTIME()
        # Expire certs in 1 hour.
        expire_time.set_time(int(time.time()) + 60 * 60 * 24)

        cs_rsa = m2.RSA.gen_key(1024, 65537, lambda: None)
        cs_pk = m2.EVP.PKey()
        cs_pk.assign_rsa(cs_rsa)

        cs_cert = m2.X509.X509()
        # These two seem the minimum necessary to make the as_text function
        # call work at all.
        cs_cert.set_not_before(cur_time)
        cs_cert.set_not_after(expire_time)
        # This seems necessary to fill out the complete cert without errors.
        cs_cert.set_pubkey(cs_pk)

        # I've tried with the following set lines commented out and not commented.
        cs_name = m2.X509.X509_Name()
        cs_name.C = "US"
        cs_name.ST = "CA"
        cs_name.OU = "Fake Org CA 1"
        cs_name.CN = "www.fakeorg.dex"
        cs_name.Email = "[email protected]"
        cs_cert.set_subject(cs_name)
        cs_cert.set_issuer_name(cs_name)

        cs_cert.sign(cs_pk, md="sha256")

        orig_text = cs_cert.as_text()
        orig_pem = cs_cert.as_pem()
        print "orig_text:\n%s" % orig_text

        cs_cert.save_pem("/tmp/foo")
        tcs = m2.X509.load_cert("/tmp/foo")
        tcs_text = tcs.as_text()
        tcs_pem = tcs.as_pem()

        if orig_text != tcs_text:
            raise RuntimeError(
                "Texts were different.\nOrig:\n%s\nAfter load:\n%s" % (orig_text, tcs_text))
        if orig_pem != tcs_pem:
            raise RuntimeError(
                "Pems were different.\nOrig:\n%s\nAfter load:\n%s" % (orig_pem, tcs_pem))

    Read the article

  • Web Service Exception Handling

    - by SchlaWiener
    I have a WinForms app that consumes a C# web service. If the web service throws an exception, my client app always gets a SoapException instead of the "real" exception. Here's a demo:

        [WebService(Namespace = "http://tempuri.org/")]
        [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
        [System.ComponentModel.ToolboxItem(false)]
        public class Service1 : System.Web.Services.WebService
        {
            [WebMethod]
            public string HelloWorld()
            {
                throw new IndexOutOfRangeException("just a demo exception");
            }
        }

    Now, on the client side, I want to be able to handle different exceptions in different ways:

        try
        {
            ServiceReference1.Service1SoapClient client = new ServiceReference1.Service1SoapClient();
            Button1.Text = client.HelloWorld();
        }
        catch (IndexOutOfRangeException ex)
        {
            // I know how to handle IndexOutOfRangeException,
            // but this block is never reached.
        }
        catch (MyOwnException ex)
        {
            // I know how to handle MyOwnException,
            // but this block is never reached.
        }
        catch (System.ServiceModel.FaultException ex)
        {
            // I always end up in this block.
        }

    But that does not work, because I always get a System.ServiceModel.FaultException, and I can only figure out the "real" exception by parsing the exception's Message property:

        System.Web.Services.Protocols.SoapException: Server was unable to process request.
        ---> System.IndexOutOfRangeException: just a demo
           at SoapExceptionTest.Service1.Service1.HelloWorld() in ...
        --- End of inner exception stack trace ---

    Is there a way to make this work somehow?
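    Since ASMX faults cross the wire as untyped SOAP faults, the original CLR type genuinely isn't available to the client as an exception type. One hedged workaround (a sketch, not a recommendation - the string matching is fragile) is to dispatch on the fault text:

        try
        {
            var client = new ServiceReference1.Service1SoapClient();
            Button1.Text = client.HelloWorld();
        }
        catch (System.ServiceModel.FaultException ex)
        {
            // The server-side exception type is embedded in the fault reason.
            string reason = ex.Message;

            if (reason.Contains("System.IndexOutOfRangeException"))
            {
                // handle the index error case
            }
            else if (reason.Contains("MyOwnException"))
            {
                // handle the custom exception case
            }
            else
            {
                throw; // unknown fault - let it bubble up
            }
        }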

    Read the article

  • Reset scale/width/zoom of Safari on iPhone using JavaScript/onorientationchange

    - by dwarbi
    I am displaying different content depending on how the user is holding his/her phone, using the onorientationchange call in the body tag. This works great: I hide one div while making the other visible. The div in portrait mode looks great on first load. I use this to get the right scale/zoom:

        <meta name="viewport" content="width=device-width; initial-scale=1.0; maximum-scale=1.0;" />

    Even if the content in portrait mode runs over, the width is correct and the user can scroll down. The display in landscape mode is perfect too. However, if the content in landscape mode requires the user to scroll down, then when the user returns to portrait mode the screen is "zoomed out", so to speak. This happens whether or not the user actually scrolled down while in landscape mode. I've tried many different things to try to get the scale/zoom/width of the screen right, but no luck. Is there any way to do this? Thanks in advance!
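    One approach worth sketching (hypothetical: the handler name is made up, and rewriting the meta tag is a commonly cited workaround rather than a documented API) is to rewrite the viewport meta tag's content whenever the orientation changes, nudging Mobile Safari into re-applying the scale:

        // Assumes the page contains the viewport meta tag quoted above.
        function resetViewport() {
          var meta = document.querySelector('meta[name=viewport]');
          if (meta) {
            meta.setAttribute('content',
              'width=device-width; initial-scale=1.0; maximum-scale=1.0;');
          }
        }
        window.onorientationchange = resetViewport;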

    Read the article

  • Using two versions of the same assembly (system.web.mvc) at the same time

    - by Joel Abrahamsson
    I'm using a content management system whose admin interface uses MVC 1.0. I would like to build the public parts of the site using MVC 2. If I just reference System.Web.Mvc version 2 in my project, the admin mode doesn't work, as the reference to System.Web.Mvc.ViewPage created by the views in the admin interface is ambiguous:

        The type 'System.Web.Mvc.ViewPage' is ambiguous: it could come from assembly
        'C:\Windows\assembly\GAC_MSIL\System.Web.Mvc\2.0.0.0__31bf3856ad364e35\System.Web.Mvc.dll'
        or from assembly
        'C:\Windows\assembly\GAC_MSIL\System.Web.Mvc\1.0.0.0__31bf3856ad364e35\System.Web.Mvc.dll'.
        Please specify the assembly explicitly in the type name.

    I could easily work around this by using binding redirects to specify that MVC 2 should always be used. Unfortunately, the content management system's admin mode isn't compatible with MVC 2. I'm not exactly sure why, but I start getting a bunch of null reference exceptions in some of its actions when I try it, and the developers of the CMS have confirmed that it isn't compatible with MVC 2 (yet).

    The admin interface, which is accessed through domain.com/admin, is not physically located in webroot/admin but in the Program Files folder on the server; domain.com/admin is instead routed there using a virtual path provider. Therefore, putting a separate web.config file in the admin folder to specify a different version of System.Web.Mvc for that part of the site isn't an option, as that won't fly when using shared hosting. Can anyone see any solution to this problem? Perhaps it's possible to specify that for some assemblies a different version of a referenced assembly should be used?
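    For reference, the binding-redirect workaround mentioned above (which forces everything to MVC 2 and therefore doesn't solve the per-area problem) is the standard web.config fragment along these lines:

        <runtime>
          <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
            <dependentAssembly>
              <assemblyIdentity name="System.Web.Mvc"
                                publicKeyToken="31bf3856ad364e35" />
              <bindingRedirect oldVersion="1.0.0.0" newVersion="2.0.0.0" />
            </dependentAssembly>
          </assemblyBinding>
        </runtime>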

    Read the article

  • Does Subversion have an analogue to VSS's links?

    - by bta
    I am migrating a Visual SourceSafe code repository to Subversion and I am running into a problem. Here is a simplified layout of our current source code tree (in VSS):

        project_root\
        |-libs\
        |-tools\
        |-arch_1\
        | |-include
        | |-source
        |-arch_2\
          |-include
          |-source

    My problem is in our two arch_ folders. Each arch_ folder will be built for a different hardware architecture, but the contents of the two folders are practically identical. The files in arch_2 are merely VSS links to the files in arch_1, with only a small handful of exceptions. Work is generally checked into and out of the arch_1 folder, and the VSS links make sure that any code checked in here is updated in the arch_2 folder as well.

    Moving to Subversion, is there anything that will behave like VSS's links? That is, is there a way to have two files in separate folders magically associated with one another such that they will always be in sync with each other (changes to one will affect the other as well)?

    Note: I know the correct answer here is to fix the build system. The build system on this project was pieced together roughly a decade ago, back when our compiler/build system wasn't intelligent enough to compile the same folder full of source code for two different architectures. Thanks to make and updated compilers, we can rewrite the build system to eliminate this dependency on two parallel source folders. However, this will take time that we don't have at the moment (we are losing our license to our VSS server and are being forced to migrate on rather short notice). I am hoping to find a Subversion solution to this problem because at the moment our time would be much better spent making the migration run smoothly than rewriting the build system (which is next on my to-do list!). Thank you for your help!

    Read the article

  • NHibernate.MappingException (no persister for) weirdness

    - by Berryl
    The weird part is that I have other tests that validate the mapping, and even the method being called (NHib session.SaveOrUpdate), that run just fine. The entire exception is below. Here is some debug output from a test that does work:

        Item type: Domain.Model.Projects.Project
        item: 007-00-056 ATM Machine Replacement
        Is transient: True
        Id: 0
        NHibernate: INSERT INTO Projects (Code, Description) VALUES (@p0, @p1); select insert_rowid();
        @p0 = '007-00-056', @p1 = 'ATM Machine Replacement'

    Here is the same debug output just before the exception:

        Item type: Smack.ConstructionAdmin.Domain.Model.Projects.Project
        item: 006-00-023 Refinish Casino Chairs
        Is transient: True
        Id: 0

    The two tests are different in that the one that works is just testing the repository and saving in-memory test data. The failing one is saving data that has been converted from a legacy db (which has its own session). The repository is also a replacement design for a different IProjectRepository that worked fine doing this, so the new repository is also a likely suspect here. Does anyone see what I'm missing, or have some questions to narrow it down? Cheers, Berryl

    === the exception trace =====

        failed: NHibernate.MappingException : No persister for: Domain.Model.Projects.Project
        at NHibernate.Impl.SessionFactoryImpl.GetEntityPersister(String entityName)
        at NHibernate.Impl.SessionImpl.GetEntityPersister(String entityName, Object obj)
        at NHibernate.Event.Default.AbstractSaveEventListener.SaveWithGeneratedId(Object entity, String entityName, Object anything, IEventSource source, Boolean requiresImmediateIdAccess)
        at NHibernate.Event.Default.DefaultSaveOrUpdateEventListener.SaveWithGeneratedOrRequestedId(SaveOrUpdateEvent event)
        at NHibernate.Event.Default.DefaultSaveEventListener.SaveWithGeneratedOrRequestedId(SaveOrUpdateEvent event)
        at NHibernate.Event.Default.DefaultSaveOrUpdateEventListener.EntityIsTransient(SaveOrUpdateEvent event)
        at NHibernate.Event.Default.DefaultSaveEventListener.PerformSaveOrUpdate(SaveOrUpdateEvent event)
        at NHibernate.Event.Default.DefaultSaveOrUpdateEventListener.OnSaveOrUpdate(SaveOrUpdateEvent event)
        at NHibernate.Impl.SessionImpl.FireSave(SaveOrUpdateEvent event)
        at NHibernate.Impl.SessionImpl.Save(Object obj)
        NHibernate\Repository\NHibRepository.cs(40,0): at Core.Data.NHibernate.Repository.NHibRepository`1.Add(T item)
        Repositories\ProjectRepository.cs(30,0): at Data.Repositories.ProjectRepository.SaveAll(IEnumerable`1 projects)
        LegacyConversion\LegacyBatchUpdater.cs(20,0): at Data.LegacyConversion.LegacyBatchUpdater.ConvertOpenLegacyProjects(ILegacyProjectDao legacyProjectDao, IProjectRepository greenProjectRepository)
        Data\Brownfield\ProjectBatchUpdate_SQLiteTests.cs(31,0): at .Tests.Data.Brownfield.ProjectBatchUpdate_SQLiteTests.Test()

    Read the article

  • UISearchBar in a UITableView

    - by petert
    I'm trying to mimic the behaviour of a table view like the one in the iPod app for Artists: a sectioned table view with a section index on the right and a search bar at the top, initially hidden when the view is shown. I am using SDK 3.1.2 and IB, so I simply dragged a UISearchDisplayController into my NIB - it does wire everything up ready for searching. The problem starts because I'm adding the UISearchBar into the first section of the UITableView; if I understand correctly, I must do this so I can jump to the search bar by touching the search icon in the section index. When the table view appears I see the search bar, but it has resized and I now have a white block behind the section index at the top. It doesn't take on the color of the UISearchBar's surround, which interestingly is different from that shown in Interface Builder. Searching around, I did find a tip to put a small navigation bar and a UISearchBar in a UIView, then add this to the table view cell - this works... BUT the color of the navigation bar's background is what you'd expect normally (gray), not the different color noted above?! More interestingly, if I tap the search bar to start a search, then tap Cancel, all is fixed!!! The background along the whole table view cell where the search bar sits is then the same!!?! Thanks for any tips.

    Read the article

  • Kanban vs. Scrum

    - by Andrew Siemer
    Can someone with Kanban experience tell me how Kanban and Scrum differ? What are the pros and cons of each of these project management methodologies? Kanban seems to be getting a lot of press these days, and I don't want to miss the hottest new way of tracking my team's failures (...and successes).

    Responses:

    @S. Lott - "What part of this article wasn't clear enough? infoq.com/articles/hiranabe-lean-agile-kanban/... Do you have a more specific question?" That is a great article, but technically, no, it is not clear enough. It gives a great amount of detail about Kanban (and thank you for it - good read), but it does not specifically contrast Kanban with Scrum. The article will help someone like me make a decision, but it most certainly won't help someone like my boss, or in general someone less experienced! I was hoping for a quick overview of Kanban's pros and cons contrasted with Scrum's pros and cons. Thanks though!

    @S. Lott - "Why do you say kanban vs. scrum? What leads you to conclude they are conflicting approaches? Can you make your question more specific?" I don't think they are necessarily conflicting, but they are different enough for a user to adhere to one over the other. Perhaps one fits a project or company better than the other? How would I sell one over the other when presenting a project management approach? Say I went to a company that was currently stuck in the rut that is "waterfall" - why would I sell one approach over the other?

    Read the article

  • ASP.Net FormsAuthentication Redirect Loses the cookie between Redirect and Application_AuthenticateRequest

    - by Joel Etherton
    I have a FormsAuthentication cookie that is persistent and works independently in development, test, and production environments. I have a user that can authenticate; the user object is created and the authentication cookie is added to the response:

        'Custom object to grab the TLD from the url
        authCookie.Domain = myTicketModule.GetTopLevelDomain(Request.ServerVariables("HTTP_HOST"))
        FormsAuthentication.SetAuthCookie(authTicket.Name, False)
        Response.SetCookie(authCookie)

    The user gets processed a little bit to check for a first-time login, security questions, etc., and is then redirected with the following tidbit:

        Session.Add("ForceRedirect", "/FirstTimeLogin.aspx")
        Response.Redirect("~/FirstTimeLogin.aspx", True)

    With a debug break, I can verify that the cookie collection holds both a cookie not related to authentication that I set for a different purpose and the FormsAuthentication cookie. Then the next step in the process occurs, at Application_AuthenticateRequest in the global.asax:

        Sub Application_AuthenticateRequest(ByVal sender As Object, ByVal e As EventArgs)
            Dim formsCookieName As String = myConfigurationManager.AppSettings("FormsCookieName")
            Dim authCookie As HttpCookie = Request.Cookies(formsCookieName)

    At this point, for this ONE user, authCookie is Nothing. I have 15,000 other users who are not impacted in this manner. However, for this one user the cookie just vanishes without a trace. I've seen this before with w3wp.exe exceptions, state server exceptions and other IIS process-related exceptions, but I'm getting no exceptions in the event log. w3wp.exe is not crashing, the state server has some timeouts but they appear unrelated (as verified by timestamps), and it only happens to this one user on this one domain (this code is used across 2 different TLDs with approximately 10 other subdomains).

    One avenue I'm investigating is that the cookie might just be too large. I would think that there would be a check for the size of the cookie going into the response, and I wouldn't think it would impact it this way. Any ideas why the request might be dumping the cookie?

    NOTE: The session token is NOT lost when this happens. However, since the authentication cookie is lost, it is ignored and replaced on a subsequent login.
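    On the cookie-size theory: browsers commonly cap individual cookies at around 4KB and silently drop anything larger, so one cheap check (a hedged sketch - the trace destination is illustrative) is to log the outgoing cookie's size right after it is set:

        Dim sizeCheck As HttpCookie = Response.Cookies(FormsAuthentication.FormsCookieName)
        If sizeCheck IsNot Nothing Then
            ' Rough size: name + value; ~4096 bytes is the usual danger zone.
            Dim size As Integer = sizeCheck.Name.Length + If(sizeCheck.Value, "").Length
            System.Diagnostics.Trace.WriteLine("Auth cookie size: " & size & " bytes")
        End If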

    Read the article

  • how to 'load data infile' on amazon RDS?

    - by feydr
    Not sure if this is a question better suited for Server Fault, but I've been messing with Amazon RDS lately and was having trouble granting 'file' privileges to my web host's MySQL user. I'd assume that a simple:

        grant file on *.* to 'webuser'@'%';

    would work, but it does not, and I can't seem to do it with my 'root' user either. What gives? The reason we use LOAD DATA is that it is super fast for doing thousands of inserts at once. Anyone know how to remedy this, or do I need to find a different way? This page, http://docs.amazonwebservices.com/AmazonRDS/latest/DeveloperGuide/index.html?Concepts.DBInstance.html, seems to suggest that I need to find a different way around this. Help?

    UPDATE: I'm not trying to import a database - I just want to use the file load option to insert several hundred thousand rows at a time. After digging around, this is what we have:

        mysql> grant file on *.* to 'devuser'@'%';
        ERROR 1045 (28000): Access denied for user 'root'@'%' (using password: YES)

        mysql> select User, File_priv, Grant_priv, Super_priv from mysql.user;
        +----------+-----------+------------+------------+
        | User     | File_priv | Grant_priv | Super_priv |
        +----------+-----------+------------+------------+
        | rdsadmin | Y         | Y          | Y          |
        | root     | N         | Y          | N          |
        | devuser  | N         | N          | N          |
        +----------+-----------+------------+------------+
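    A hedged aside that may be relevant here: the FILE privilege only governs files read by the server itself, which RDS locks down. The LOCAL variant streams the file from the client connection instead and does not require the FILE privilege, so something along these lines (table and file names are made up) is the usual workaround:

        LOAD DATA LOCAL INFILE '/tmp/rows.csv'
        INTO TABLE mytable
        FIELDS TERMINATED BY ','
        LINES TERMINATED BY '\n';

    The client has to be started with local-infile enabled (e.g. mysql --local-infile=1) for this to work.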

    Read the article

  • Changing an xcode project Path with lowercase path issue

    - by joneswah
    I have a problem with an Xcode project which has a path of "/users/me/blah", with a lowercase 'u'. When I check my other projects (in the General tab of the Project Info window), the path starts with an uppercase "Users". This causes a couple of problems. When I try to add an existing file which is "Relative to Group" or "Relative to Project", Xcode thinks it needs to change directory all the way up to the root. For example, the path for any included file ends up as "../../../../Users/me/blah", which then prevents the project working on other people's machines, because the "relative" path is essentially an absolute path... sigh. The other side effect is that when you select "Add Existing Files", instead of greying out all of the already-included files, it leaves ALL of the files available for selection - because it thinks files in "Users" are different from files in "users". I have tried checking the project out again into a different directory, but no difference. I am not sure how I ended up with the wrong path in the first instance; no doubt something stupid that I did. Anyone have a clue on how I can change the project path or otherwise resolve this? Thanks.

    Read the article

  • How to solve validation error on xsi:noNamespaceSchemaLocation in jdoconfig.xml

    - by mamuso
    Since I updated today to GAE 1.7.2.1, I'm getting validation errors in Eclipse in all my jdoconfig.xml files. I have the default jdoconfig.xml content:

        [...]
        <jdoconfig xmlns="http://java.sun.com/xml/ns/jdo/jdoconfig"
                   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                   xsi:noNamespaceSchemaLocation="http://java.sun.com/xml/ns/jdo/jdoconfig">
        [...]

    And Eclipse validation throws:

        Referenced file contains errors (http://java.sun.com/xml/ns/jdo/jdoconfig).
        For more information, right click on the message in the Problems View and select "Show Details..."

    When clicking on details I can see a bunch of lines like:

        s4s-elt-character: Non-whitespace characters are not allowed in schema elements other than
        'xs:appinfo' and 'xs:documentation'. Saw 'var_U = "undefined";'.

    The same error appears on different lines, with different content after "Saw". It occurs in every single project I start using "New Web Application Project..." from the Google plugin. So, does anyone else have this problem? Any fix?

    Read the article
