Search Results

Search found 8401 results on 337 pages for 'bad habits'.


  • Cloud Fact for Business Managers #3: Where Your Data Is, and Who Has Access to It, Might Surprise You

    - by yaldahhakim
    Written by: David Krauss

    While data security and operational risk conversations usually happen around the desk of a CCO/CSO (chief compliance and/or security officer), or perhaps the CFO, since business managers are now selecting cloud providers they need to be able to at least ask some high-level questions on the topic of risk and compliance. While the report found that 76% of adopters were motivated to adopt cloud apps because of quick access to software, most of these managers found that after they made a purchase decision their access to exciting new capabilities in the cloud could be hindered due to performance and scalability constraints put forth by their cloud provider. If you are going to let your business consume its mission-critical business applications as a service, then it's important to understand who is providing those cloud services and what kind of performance you are going to get. Different types of departments, companies and industries will all have unique requirements, so it's key to take this into consideration as well.

    Nothing puts a CEO in a bad mood like a public data breach, or finding out the company lost money when customers couldn't buy a product or service because your cloud service provider had a problem. With 42% of business managers having seen a data security breach in their department associated directly with the use of cloud applications, this is happening more than you think.

    We've talked about the importance of being able to avoid information silos through a unified cloud approach and platform. This is also important when keeping your data safe and secure, and a key conversation to have with your cloud provider. Your customers want to know that their information is protected when they do business with you, just like you want your own company information protected. This is really hard to do when each line of business is running different cloud application services managed by different cloud providers, all with different processes and controls. It only adds to the complexity, and the more complex it is, the riskier it is and the greater the chance that something will go wrong.

    What about compliance? Depending on the cloud provider, it can be difficult at best to understand who has access to your data, and where your data is actually stored. Add to this multiple cloud providers spanning multiple departments and it becomes very problematic when trying to comply with certain industry and country data security regulations. With 73% of business managers complaining that having cloud data handled externally by one or more cloud vendors makes it hard for their department to be compliant, this is a big time suck for executives and it puts the organization at risk.

    Is There A Complete, Integrated, Modern Cloud Out There for Business Executives? If you are a business manager looking to drive faster innovation for your business and want a cloud application that your CIO would approve of, I would encourage you to take a look at Oracle Cloud. It's everything you want from a SaaS-based application, but without compromising on functionality and other modern capabilities like embedded business intelligence, social relationship management (for your entire business), and advanced mobile. And because Oracle Cloud is built and managed by Oracle, you can be confident that your cloud application services are enterprise-grade. Over 25 million users and 10 thousand companies around the globe rely on Oracle Cloud application services every day - maybe your business should too.
    For more information, visit cloud.oracle.com.

    Additional Resources:
    • Try it: cloud.oracle.com
    • Learn more: http://www.oracle.com/us/corporate/features/complete-cloud/index.html
    • Research Report: Cloud for Business Managers: The Good, the Bad, and the Ugly

    Read the article

  • Entity System with C++ templates

    - by tommaisey
    I've been getting interested in the Entity/Component style of game programming, and I've come up with a design in C++ which I'd like a critique of. I decided to go with a fairly pure Entity system, where entities are simply an ID number. Components are stored in a series of vectors - one for each Component type. However, I didn't want to have to add boilerplate code for every new Component type I added to the game. Nor did I want to use macros to do this, which frankly scare me. So I've come up with a system based on templates and type hinting. But there are some potential issues I'd like to check before I spend ages writing this (I'm a slow coder!).

    All Components derive from a Component base class. This base class has a protected constructor that takes a string parameter. When you write a new derived Component class, you must initialise the base with the name of your new class in a string. When you first instantiate a new DerivedComponent, it adds the string to a static hashmap inside Component, mapped to a unique integer id. When you subsequently instantiate more Components of the same type, no action is taken. The result (I think) should be a static hashmap with the name of each class derived from Component that you instantiate at least once, mapped to a unique id, which can be obtained with the static method Component::getTypeId("DerivedComponent"). Phew.

    The next important part is TypedComponentList<typename PropertyType>. This is basically just a wrapper around an std::vector<typename PropertyType> with some useful methods. It also contains a hashmap of entity ID numbers to slots in the array so we can find Components by their entity owner. Crucially, TypedComponentList<> is derived from the non-template class ComponentList. This allows me to maintain a list of pointers to ComponentList in my main ComponentManager, which actually point to TypedComponentLists with different template parameters (sneaky). The ComponentManager has template functions such as: template <typename ComponentType> void addProperty (ComponentType& component, int componentTypeId, int entityId) and: template <typename ComponentType> TypedComponentList<ComponentType>* getComponentList (int componentTypeId), which deal with casting from ComponentList to the correct TypedComponentList for you. So to get a list of a particular type of Component you call: TypedComponentList<MyComponent>* list = componentManager.getComponentList<MyComponent> (Component::getTypeId("MyComponent")); which I'll admit looks pretty ugly.

    Bad points of the design:
    • If a user of the code writes a new Component class but supplies the wrong string to the base constructor, the whole system will fail.
    • Each time a new Component is instantiated, we must check a hashed string to see if that component type has been instantiated before.
    • It will probably generate a lot of assembly because of the extensive use of templates. I don't know how well the compiler will be able to minimise this.
    • You could consider the whole system a bit complex - perhaps premature optimisation? But I want to use this code again and again, so I want it to be performant.

    Good points of the design:
    • Components are stored in typed vectors but they can also be found by using their entity owner id as a hash. This means we can iterate them fast, and minimise cache misses, but also skip straight to the component we need if necessary.
    • We can freely add Components of different types to the system without having to add and manage new Component vectors by hand.

    What do you think? Do the good points outweigh the bad?
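
    For comparison, here is a rough C# analogue of the storage idea (typed lists hidden behind a non-generic base, keyed by component type). The class names echo the question, but the details are assumptions for illustration, not the poster's C++ code:

        using System;
        using System.Collections.Generic;

        // Non-generic base so the manager can hold lists of different component types.
        abstract class ComponentList { }

        // Typed storage: a dense list plus an entity-id -> slot map for direct lookup.
        class TypedComponentList<T> : ComponentList
        {
            private readonly List<T> components = new List<T>();
            private readonly Dictionary<int, int> slotByEntity = new Dictionary<int, int>();

            public void Add(int entityId, T component)
            {
                slotByEntity[entityId] = components.Count;
                components.Add(component);
            }

            public T GetByEntity(int entityId) => components[slotByEntity[entityId]];
            public IReadOnlyList<T> All => components; // iterate densely, cache-friendly
        }

        class ComponentManager
        {
            // One list per component type, discovered lazily instead of registered by hand.
            private readonly Dictionary<Type, ComponentList> lists = new Dictionary<Type, ComponentList>();

            public TypedComponentList<T> GetComponentList<T>()
            {
                if (!lists.TryGetValue(typeof(T), out var list))
                {
                    list = new TypedComponentList<T>();
                    lists[typeof(T)] = list;
                }
                return (TypedComponentList<T>)list;
            }

            public void AddComponent<T>(int entityId, T component) =>
                GetComponentList<T>().Add(entityId, component);
        }

    In C# the Type object itself serves as the registry key, which sidesteps the fragile string-to-id registration the question worries about; the closest C++ equivalent would be keying on something like std::type_index instead of a hand-supplied string.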

    Read the article

  • Don’t string together XML

    - by KyleBurns
    XML has been a pervasive tool in software development for over a decade. It provides a way to communicate data in a manner that is simple to understand and free of platform dependencies. Also pervasive in software development is what I consider to be the anti-pattern of using string manipulation to create XML. This usually starts with a "quick and dirty" approach because you need an XML document and looks like (for all of the examples here, we'll assume we're writing the body of a method intended to take a Contact object and return an XML string):

        return string.Format("<Contact><BusinessName>{0}</BusinessName></Contact>", contact.BusinessName);

    In the code example, I created (or at least believe I created) an XML document representing a simple contact object in one line of code with very little overhead. Work's done, right? No, it's not. You see, what I didn't realize was that this code would be used in the real world instead of my fantasy world where I own all the data and can prevent any of it containing problematic values. If I use this code to create a contact record for the business "Sanford & Son", any XML parser will be incapable of processing the data because the ampersand is special in XML and should have been encoded as &amp;.

    Following the pattern that I have seen many times over, my next step as a developer is going to be to do what any developer in his right mind would do – instruct the user that ampersands are "bad" and they cannot be used without breaking computers. This may work in many cases and is often accompanied by logic at the UI layer of applications to block these "bad" characters, but sooner or later someone is going to figure out that other applications allow for them and will want the same. This often leads to the creation of "cleaner" functions that perform a replace on the strings for every special character that the person writing the function can think of. The cleaner function will usually grow over time as support requests reveal characters that were missed in the initial cut. Sooner or later you end up writing your own somewhat functional XML engine.

    I have never been told by anyone paying me to write code that they would like to buy a somewhat functional XML engine. My employer/customer's needs have always been for something that may use XML, but ultimately is functionality that drives business value. I'm not going to build an XML engine.

    So how can I generate XML that is always well-formed without writing my own engine? Easy – use one of the ones provided to you for free! If you're in a shop that still supports VB6 applications, you can use the DomDocument or MXXMLWriter object (of the two I prefer MXXMLWriter, but I'm not going to fully describe either here).
    For .Net Framework applications prior to the 3.5 framework, the code is a little more verbose than I would like, but easy once you understand what pieces are required:

        using (StringWriter sw = new StringWriter())
        {
            using (XmlTextWriter writer = new XmlTextWriter(sw))
            {
                writer.WriteStartDocument();
                writer.WriteStartElement("Contact");
                writer.WriteElementString("BusinessName", contact.BusinessName);
                writer.WriteEndElement(); // end Contact element
                writer.WriteEndDocument();
                writer.Flush();
                return sw.ToString();
            }
        }

    Looking at that code, it's easy to understand why people are drawn to the initial one-liner. Lucky for us, the 3.5 .Net Framework added the System.Xml.Linq.XElement object. This object takes away a lot of the complexity present in the XmlTextWriter approach and allows us to generate the document as follows:

        return new XElement("Contact", new XElement("BusinessName", contact.BusinessName)).ToString();

    While it is very common for people to use string manipulation to create XML, I've discussed here reasons not to use this method and introduced powerful APIs that are built into the .Net Framework as an alternative. I've given a very simplistic example here to highlight the most basic XML generation task. For more information on the XmlTextWriter and XElement APIs, check out the MSDN library.
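
    As a quick check of the escaping point (a hypothetical snippet, not from the original post), feeding the problematic "Sanford & Son" value through XElement shows the encoding being handled for you:

        using System;
        using System.Xml.Linq;

        class EscapingDemo
        {
            static void Main()
            {
                // XElement encodes the ampersand itself - no hand-rolled "cleaner" function needed.
                string xml = new XElement("Contact",
                    new XElement("BusinessName", "Sanford & Son")).ToString();

                Console.WriteLine(xml);
                // Output:
                // <Contact>
                //   <BusinessName>Sanford &amp; Son</BusinessName>
                // </Contact>
            }
        }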

    Read the article

  • Unable to debug WCF service in VS2008 after UserNamePasswordValidator fault

    - by lsb
    Hi! I have a WCF service that I secure with a custom UserNamePasswordValidator and Message security running over wsHttpBinding. The release code works great. Unfortunately, when I try to run in debug mode after having previously used invalid credentials (the current credentials ARE valid!) VS2008 displays an annoying dialog box (more on this below). A simplified version of my Validate method from the validator might look like the following:

        public override void Validate(string userName, string password)
        {
            if (password != "ABC123")
                throw new FaultException("The password is invalid!");
        }

    The client receives a MessageSecurityException with InnerException set to the FaultException I explicitly threw. This is workable, since my client can display the message text of the original FaultException I wanted the user to see. Unfortunately, in all subsequent service calls VS2008 displays an "Unable to automatically debug..." dialog. The only way I can stop this from happening is to exit VS2008, get back in and connect to my service using correct credentials. I should also add that this occurs even when I create a brand new proxy on each and every call. There's no chance MY channel is faulted when I make a call. It's likely, however, that VS2008 hangs on to the previously faulted channel and tries to use it for debugging purposes. Needless to say, this sucks! The entire reason I'm entering "bad" credentials is to test the bad-credential handling. Anyway, if anyone has any ideas as to how I can get around this bug (?!?) I'd be very, very appreciative...

    Read the article

  • Build 32-bit with 64-bit llvm-gcc

    - by Jay Conrod
    I have a 64-bit version of llvm-gcc, but I want to be able to build both 32-bit and 64-bit binaries. Is there a flag for this? I tried passing -m32 (which works on the regular gcc), but I get an error message like this:

        [jay@andesite]$ llvm-gcc -m32 test.c -o test
        Warning: Generation of 64-bit code for a 32-bit processor requested.
        Warning: 64-bit processors all have at least SSE2.
        /tmp/cchzYo9t.s: Assembler messages:
        /tmp/cchzYo9t.s:8: Error: bad register name `%rbp'
        /tmp/cchzYo9t.s:9: Error: bad register name `%rsp'
        ...

    This is backwards; I want to generate 32-bit code for a 64-bit processor! I'm running llvm-gcc 4.2, the one that comes with Ubuntu 9.04 x86-64.

    EDIT: Here is the relevant part of the output when I run llvm-gcc with the -v flag:

        [jay@andesite]$ llvm-gcc -v -m32 test.c -o test.bc
        Using built-in specs.
        Target: x86_64-linux-gnu
        Configured with: ../llvm-gcc4.2-2.2.source/configure --host=x86_64-linux-gnu --build=x86_64-linux-gnu --prefix=/usr/lib/llvm/gcc-4.2 --enable-languages=c,c++ --program-prefix=llvm- --enable-llvm=/usr/lib/llvm --enable-threads --disable-nls --disable-shared --disable-multilib --disable-bootstrap
        Thread model: posix
        gcc version 4.2.1 (Based on Apple Inc. build 5546) (LLVM build)
        /usr/lib/llvm/gcc-4.2/libexec/gcc/x86_64-linux-gnu/4.2.1/cc1 -quiet -v -imultilib . test.c -quiet -dumpbase test.c -m32 -mtune=generic -auxbase test -version -o /tmp/ccw6TZY6.s

    I looked in /usr/lib/llvm/gcc-4.2/libexec/gcc hoping to find another binary, but the only directory there is x86_64-linux-gnu. I will probably look at compiling llvm-gcc from source with appropriate options next.

    Read the article

  • SQL Server Management Studio – tips for improving the TSQL coding process

    - by kristof
    I used to work in a place where a common practice was to use pair programming. I remember how many small things we could learn from each other when working together on the code. Picking up new shortcuts, code snippets etc. with time significantly improved our efficiency of writing code. Since I started working with SQL Server I have been left on my own; the best habits I would normally pick up from working together with other people, which I cannot do now. So here is the question: What are your tips on efficiently writing TSQL code using SQL Server Management Studio?

    • Please keep the tips to 2-3 things/shortcuts that you think improve your speed of coding
    • Please stay within the scope of TSQL and SQL Server Management Studio 2005/2008
    • If the feature is specific to the version of Management Studio, please indicate it, e.g. "Works with SQL Server 2008 only"

    Thanks

    EDIT: I am afraid that I could have been misunderstood by some of you. I am not looking for tips for writing efficient TSQL code but rather for advice on how to efficiently use Management Studio to speed up the coding process itself. The type of answers that I am looking for are: use of templates, keyboard shortcuts, use of IntelliSense, plugins etc. Basically those little things that make the coding experience a bit more efficient and pleasant. Thanks again

    Read the article

  • How to Impersonate a user for a file copy over the network when dns or netbios is not available

    - by Scott Chamberlain
    I have ComputerA on DomainA, running as UserA, needing to copy a very large file to ComputerB on WorkgroupB (which has the IP 192.168.10.2), to a Windows share that only UserB has write access to. There is no NetBIOS or DNS resolving, so the computer must be referenced by IP. First I tried:

        AppDomain.CurrentDomain.SetPrincipalPolicy(System.Security.Principal.PrincipalPolicy.WindowsPrincipal);
        WindowsIdentity UserB = new WindowsIdentity("192.168.10.2\\UserB", "PasswordB"); // Exception
        WindowsImpersonationContext contex = UserB.Impersonate();
        File.Copy(@"d:\bigfile", @"\\192.168.10.2\bigfile");
        contex.Undo();

    but I get a System.Security.SecurityException: "The name provided is not a properly formed account name." So I tried:

        AppDomain.CurrentDomain.SetPrincipalPolicy(System.Security.Principal.PrincipalPolicy.WindowsPrincipal);
        WindowsIdentity webinfinty = new WindowsIdentity("ComputerB\\UserB", "PasswordB"); // Exception

    But I get "Logon failure: unknown user name or bad password." instead. So then I tried:

        IntPtr token;
        bool succeded = LogonUser("UserB", "192.168.10.2", "PasswordB", LogonTypes.Network, LogonProviders.Default, out token);
        if (!succeded)
        {
            throw new Win32Exception(Marshal.GetLastWin32Error());
        }
        WindowsImpersonationContext contex = WindowsIdentity.Impersonate(token);
        (...)

        [DllImport("advapi32.dll", SetLastError = true)]
        static extern bool LogonUser(
            string principal,
            string authority,
            string password,
            LogonTypes logonType,
            LogonProviders logonProvider,
            out IntPtr token);

    but LogonUser returns false with the Win32 error "Logon failure: unknown user name or bad password". I know my username and password are fine; I have logged on to ComputerB as that user. Any recommendations?
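
    For what it's worth, the variation most often suggested for cross-machine share access with explicit credentials is the LOGON32_LOGON_NEW_CREDENTIALS logon type (9) with the LOGON32_PROVIDER_WINNT50 provider (3). A minimal, untested sketch of that approach using .NET Framework-era APIs follows; the share and path names are placeholders, not from the question:

        using System;
        using System.ComponentModel;
        using System.IO;
        using System.Runtime.InteropServices;
        using System.Security.Principal;

        class CopyAsRemoteUser
        {
            const int LOGON32_LOGON_NEW_CREDENTIALS = 9; // local identity unchanged, new creds used on the network
            const int LOGON32_PROVIDER_WINNT50 = 3;

            [DllImport("advapi32.dll", SetLastError = true)]
            static extern bool LogonUser(string user, string domain, string password,
                int logonType, int logonProvider, out IntPtr token);

            [DllImport("kernel32.dll", SetLastError = true)]
            static extern bool CloseHandle(IntPtr handle);

            static void Main()
            {
                IntPtr token;
                // The target machine's IP stands in for the "domain" of its local account.
                if (!LogonUser("UserB", "192.168.10.2", "PasswordB",
                        LOGON32_LOGON_NEW_CREDENTIALS, LOGON32_PROVIDER_WINNT50, out token))
                {
                    throw new Win32Exception(Marshal.GetLastWin32Error());
                }

                try
                {
                    using (WindowsImpersonationContext ctx = WindowsIdentity.Impersonate(token))
                    {
                        File.Copy(@"d:\bigfile", @"\\192.168.10.2\share\bigfile"); // "share" is a placeholder
                    }
                }
                finally
                {
                    CloseHandle(token);
                }
            }
        }

    The NewCredentials logon type leaves the local identity untouched and only applies the supplied credentials to outbound network access, which is why it tends to work against workgroup machines addressed by IP.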

    Read the article

  • How to handle ViewModel and Database in C#/WPF/MVVM App

    - by Mike B
    I have a task management program with an "Urgency" field. Valid values are Int16, currently mapped to 1 (High), 2 (Medium), 3 (Low), 4 (None) and 99 (Closed). The urgency field is used to rank tasks as well as alter the look of the items in the list and detail view. When a user is editing or adding a new task they select or view the urgency in a ComboBox. A Converter passes Strings to replace the Ints. The urgency collection is so simple I did not make it a table in the database; instead it is an ObservableCollection<Int16> that is populated by a method. Since the same screen may be used to view a closed task, the "Closed" urgency must be in the ItemsSource, but I do not want the user to be able to select it. In order to prevent the user from being able to select that item in the ComboBox but still be able to see it if the item in the database has that value, should I:

    1. Manually disable the item in the ComboBox in code or Xaml (I doubt it)
    2. Change the Urgency collection from an Int16 to an Object with a Selectable property that the IsEnabled property of the ComboBoxItem binds to
    3. Do as in 2 but also separate the urgency information into its own table in the database with a foreign key in the Tasks table
    4. None of the above (I suspect this is the correct answer)

    I ask this because this is a learning project (my first real WPF and first ever MVVM project). I know there is rarely one right way to do something, but I want to make sure I am learning in a reasonable manner, since it is far harder to unlearn bad habits. Thanks, Mike
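
    A minimal sketch of what option 2 might look like (class and property names here are assumptions, not from the question); the ComboBox would then use an ItemContainerStyle that binds ComboBoxItem.IsEnabled to the item's IsSelectable flag:

        // A small view-model item for the ComboBox, replacing the raw Int16 values.
        public class UrgencyOption
        {
            public short Value { get; }        // 1..4, or 99 for Closed
            public string Label { get; }       // "High", "Medium", ..., "Closed"
            public bool IsSelectable { get; }  // false for Closed

            public UrgencyOption(short value, string label, bool isSelectable)
            {
                Value = value;
                Label = label;
                IsSelectable = isSelectable;
            }
        }

        // The XAML side (shown as a comment since this is a C# sketch):
        //   <ComboBox ItemsSource="{Binding Urgencies}" DisplayMemberPath="Label"
        //             SelectedValuePath="Value" SelectedValue="{Binding Task.Urgency}">
        //     <ComboBox.ItemContainerStyle>
        //       <Style TargetType="ComboBoxItem">
        //         <Setter Property="IsEnabled" Value="{Binding IsSelectable}" />
        //       </Style>
        //     </ComboBox.ItemContainerStyle>
        //   </ComboBox>

    Because each ComboBoxItem's DataContext is the UrgencyOption it displays, the binding in the style setter disables just the "Closed" entry while leaving it visible.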

    Read the article

  • Workflow for academic research projects, one-step builds, and the Joel Test

    - by Steve
    Working alone on academic research sometimes breeds bad habits. With no one else reading my code, I would write a lot of throw-away code, and I would lose track of intermediate results which, weeks or months later, I wish I had retained. My recent attempts to make my personal workflow conform to the Joel Test raised interesting questions. Academic research has inherently different goals than industrial software development, and therefore some aspects of the Joel Test become less valid. Nevertheless, I find these steps to be still valuable for academic research:

    • Do you use source control?
    • Can you make a build in one step?
    • Do you have an up-to-date schedule?
    • Do you have a spec?

    Of particular use is the one-step build. I find myself more organized now that I have implemented the following "one-step build": I have a single script, build.py, that accepts Python code, data, and TeX as inputs. The outputs are results, figures, and a paper with all the results filled in. (Yes, I know "build" is probably not accurate in this context, but you get the idea.) By consolidating many small steps into one, I am not backtracking as much as I used to. ...but I'm sure there is still room for improvement.

    Question: For research projects, which steps of the Joel Test do you still value? Do you have a one-step build? If so, what does yours consist of, i.e., what inputs does it accept, and what output does it generate?

    Read the article

  • Tiff Analyzer

    - by Kevin
    I am writing a program to convert some data, mainly a bunch of Tiff images. Some of the Tiffs seem to have a minor problem with them. They show up fine in some viewers (Irfanview, client's old system) but not in others (client's new system, Windows Picture and Fax Viewer). I have manually looked at the binary data and all the tags seem ok. Can anyone recommend an app that can analyze it and tell me what, if anything, is wrong with it? Also, for clarity's sake, I'm only converting the data about the images (which is stored separately in a database) and copying the images; I'm not editing the images myself, so I'm pretty sure I'm not messing them up.

    UPDATE: For anyone interested, here are the tags from a good and bad file:

    BAD
        Tag                      Type      Length  Value
        256 Image Width          SHORT     1       1652
        257 Image Length         SHORT     1       704
        258 Bits Per Sample      SHORT     1       1
        259 Compression          SHORT     1       4
        262 Photometric          SHORT     1       0
        266 Fill Order           SHORT     1       1
        273 Strip Offsets        LONG      1       210 (d2 Hex)
        274 Orientation          SHORT     1       3
        277 Samples Per Pixel    SHORT     1       1
        278 Rows Per Strip       SHORT     1       450
        279 Strip Byte Counts    LONG      1       7264 (1c60 Hex)
        282 X Resolution         RATIONAL  1       <194 200 / 1 = 200.000
        283 Y Resolution         RATIONAL  1       <202 200 / 1 = 200.000
        284 Planar Configuration SHORT     1       1
        296 Resolution Unit      SHORT     1       2

    GOOD
        Tag                      Type      Length  Value
        254 New Subfile Type     LONG      1       0 (0 Hex)
        256 Image Width          SHORT     1       1193
        257 Image Length         SHORT     1       788
        258 Bits Per Sample      SHORT     1       1
        259 Compression          SHORT     1       4
        262 Photometric          SHORT     1       0
        266 Fill Order           SHORT     1       1
        270 Image Description    ASCII     45      256
        273 Strip Offsets        LONG      1       1118 (45e Hex)
        274 Orientation          SHORT     1       1
        277 Samples Per Pixel    SHORT     1       1
        278 Rows Per Strip       LONG      1       788 (314 Hex)
        279 Strip Byte Counts    LONG      1       496 (1f0 Hex)
        280 Min Sample Value     SHORT     1       0
        281 Max Sample Value     SHORT     1       1
        282 X Resolution         RATIONAL  1       <301 200 / 1 = 200.000
        283 Y Resolution         RATIONAL  1       <309 200 / 1 = 200.000
        284 Planar Configuration SHORT     1       1
        293 Group 4 Options      LONG      1       0 (0 Hex)
        296 Resolution Unit      SHORT     1       2

    Read the article

  • Why is software quality so problematic?

    - by Yuval A
    Even when viewing the subject in the most objective way possible, it is clear that software, as a product, generally suffers from low quality. Take for example a house built from scratch. Usually, the house will function as it is supposed to. It will stand for many years to come, the roof will support heavy weather conditions, the doors and the windows will do their job, the foundations will not collapse even when the house is fully populated. Sure, minor problems do occur, like a leaking faucet or a bad paint job, but these are not critical. Software, on the other hand, is much more susceptible to bad quality: unexpected crashes, erroneous behavior, miscellaneous bugs, etc. Sure, there are many software projects and products which show high quality and are very reliable. But lots of software products do not fall in this category. Take into consideration paradigms like TDD, whose popularity has been on the rise in the past few years. Why is this? Why do people have to fear that their software will not work or crash? (Do you walk into a house fearing its foundations will collapse?) Why is software - subjectively - so full of bugs?

    Possible reasons:
    • Modern software engineering has existed for only a few decades, a small time period compared to other forms of engineering/production.
    • Software is very complicated, with layers upon layers of complexity; integrating them all is not trivial.
    • Software development is relatively easy to start with; anyone can write a simple program on his PC, which leads to amateur software leaking into the market.
    • Tight budgets and timeframes do not allow complete and high quality development and extensive testing.

    How do you explain this issue, and do you see software quality advancing in the near future?

    Read the article

  • CSS overflow and word-wrap behaviour not helping me at all

    - by henriquev
    You can see how the filename field should look at http://www.plifk.com/henvic/114 and how it breaks the layout at http://www.plifk.com/henvic/159. If I used 108574main-neutron-star-and-a-very-bad-overfow-will-happen-here-so-sad.mpg I would not get an overflow, but instead "108574main-neutron-star-and-a-very-" in the first line and "bad-overfow-happens.mpg" in the second line. What can I do to avoid getting an overflow? Please know that I don't want to use quirks (like PHP's wordwrap, nor JavaScript if possible), and I've tried some ways in CSS with word-wrap, etc., but nothing worked out. I've also tried word-break: break-all (tested on Firefox only) but it didn't work either. Even overflow: hidden; is not working... I'm not very familiar with web designing (indeed I try to do everything by the standards, etc.) and I'm completely lost right now. The uncompressed CSS file can be seen at http://pastebin.ca/1802451

    Now... I really understand that this is expected, since word-wrap is meant for text, not arbitrary character runs. But hey, even with break-all it doesn't do anything. How come? Thank you very much in advance.

    Read the article

  • Code review: Is it subjective or objective (quantifiable)?

    - by Ram
    I am putting together some guidelines for code reviews. We do not have a formal process yet and are trying to formalize it, and our team is geographically distributed. We are using TFS for source control (we used it for tasks/bug tracking/project management as well, but migrated that to JIRA) with VS2008 for development. What are the things you look for when doing a code review? These are the things I came up with:

    • Enforce FxCop rules (we are a Microsoft shop)
    • Check for performance (any tools?) and security (thinking about using OWASP Code Crawler) and thread safety
    • Adhere to naming conventions
    • The code should cover edge cases and boundary conditions
    • Handle exceptions correctly (do not swallow exceptions)
    • Check if the functionality is duplicated elsewhere
    • A method body should be small (20-30 lines), and methods should do one thing and one thing only (no side effects / avoid temporal coupling)
    • Do not pass/return nulls in methods
    • Avoid dead code
    • Document public and protected methods/properties/variables

    What other things do you generally look for? I am trying to see if we can quantify the review process (so it would produce identical output when reviewed by different persons). Example: saying "the method body should be no longer than 20-30 lines of code" as opposed to saying "the method body should be small". Or is code review very subjective (and would it differ from one reviewer to another)? The objective is to have a marking system (say -1 point for each FxCop rule violation, -2 points for not following naming conventions, 2 points for refactoring, etc.) so that developers would be more careful when they check in their code. This way, we can identify developers who are consistently writing good/bad code. The goal is to have the reviewer spend about 30 minutes max to do a review (I know this is subjective, considering the fact that the changeset/revision might include multiple files or huge changes to the existing architecture etc., but you get the general idea: the reviewer should not spend days reviewing someone's code). What other objective/quantifiable systems do you follow to identify good/bad code written by developers?

    Book reference: Clean Code: A Handbook of Agile Software Craftsmanship by Robert C. Martin

    Read the article

  • Is VB Really Case Insensitive?

    - by Otaku
    I'm not trying to start an argument here, but for whatever reason it's typically stated that VB is case insensitive and C languages aren't (and somehow that is a good thing). But here's my question: Where exactly is VB case insensitive? When I type...

        Dim ss As String
        Dim SS As String

    ...into the VS2008 IDE, the second one has a warning of "Local variable 'SS' is already declared in the current block". In the VBA VBE, it doesn't immediately kick an error, but rather just auto-corrects the case. Am I missing something here with this argument that VB is not case sensitive? (Also, if you know or care to answer, why would that be a bad thing?)

    EDIT: Why am I even asking this question? I've used VB in many of its dialects for years now, sometimes as a hobbyist, sometimes for small business-related programs in a workgroup. As of the last 6 months I've been working on a big project, much bigger than I anticipated. Much of the sample source code out there is in C#. I don't have any burning desire to learn C#, but if there are things I'm missing out on that C# offers and VB doesn't (an opposite would be that VB.NET offers XML literals), then I'd like to know more about that feature. So in this case, it's often argued that C languages are case sensitive and that's good, and VB is case insensitive and that is bad. I'd like to know A) how exactly is VB case insensitive, because every single example in the code editor becomes case sensitive (meaning case gets corrected) whether I want it or not, and B) is this compelling enough for me to consider moving to C# if VB.NET's case handling is somehow limiting what I could do with code?
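
    For contrast, here is a tiny hypothetical C# fragment showing what case sensitivity means on the other side of the argument: identifiers differing only by case are distinct symbols, so both declarations compile and refer to different variables:

        class CaseDemo
        {
            static void Main()
            {
                string ss = "lower";  // two distinct locals, legal in C#
                string SS = "upper";  // in VB these would collide (or be case-corrected by the editor)
                System.Console.WriteLine(ss + " / " + SS);
            }
        }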

    Read the article

  • Command /Developer/usr/bin/dsymutil failed with exit code 10

    - by Evan Robinson
    I am getting the above error message randomly (so far as I can tell) on iPhone projects. Occasionally it will go away upon either:

    • Clean
    • Restart Xcode
    • Reboot
    • Reinstall Xcode

    But sometimes it won't. When it won't, the only solution I have found is to take all the source material, import it into a new project, and then redo all the connections in IB. Then I'm good until it strikes again. Anybody have any suggestions?

    [update 20091030] I have tried building both debug and release versions, both full and lite versions. I've also tried switching the debug symbols from DWARF with external dSYM file to DWARF and to stabs. Clean builds in all formats make no difference. Permission repairs change nothing. Setting up a new user has no effect - same error on the builds. Thanks for the suggestions!

    [Update 20091031] Here's an easier and (apparently) reliable workaround. It hinges upon the discovery that the problem is linked to a target, not a project.

    1. In the same project file, create a new target
    2. Option-Drag (copy) all the files from the BAD target's 'Copy Bundle Resources' folder to the NEW target's 'Copy Bundle Resources' folder
    3. Repeat (2) with 'Compile Sources' and 'Link Binary With Libraries'
    4. Duplicate the Info.plist file for the BAD target and name it correctly for the NEW target
    5. Build the NEW target!

    [Update 20100222] Apparently an IDE bug, now apparently fixed, although Apple does not allow direct access to the original bug of a duplicate. I can no longer reproduce this behaviour, so hopefully it is dead, dead, dead.

    Read the article

  • Android RestTemplate Ok on emulator but fails on real device

    - by Hossein
    I'm using Spring RestTemplate and it works perfectly on the emulator, but if I run my app on a real device I get HttpMessageNotWritableException ............ nested exception is java.net.SocketException: Broken pipe. Here are some lines of my code (keep in mind my app works perfectly on the emulator):

        LoggerUtil.logToFile(TAG, "url is [" + url + "]");
        LoggerUtil.logToFile(TAG, "NetworkInfo - " + connectivityManager.getActiveNetworkInfo());
        ResponseEntity<T> responseEntity = restTemplate.exchange(url, HttpMethod.POST, requestEntity, clazz);

    I know my device's network works perfectly because all other applications on my device are working, and also, using the device browser I'm able to connect to my server, so my server is available. My server is up and running and my device is able to connect to my server, so why do I get java.net.SocketException: Broken pipe?! Before I call restTemplate.exchange() I log NetworkInfo and it looks ok:

        -type: WIFI
        -status: CONNECTED/CONNECTED
        -isAvailable: true

    Thanks in advance.

    Update: It is really weird. Even if I use HttpURLConnection, it works perfectly on the emulator, but on the real device I get 400 Bad Request. Here is my code:

        HttpURLConnection con = null;
        try {
            String url = ....;
            LoggerUtil.logToFile(TAG, "url [" + url + "]");
            con = (HttpURLConnection) new URL(url).openConnection();
            con.setRequestMethod("POST");
            con.setRequestProperty("Connection", "Keep-Alive");
            con.setDoInput(true);
            con.setDoOutput(true);
            con.setUseCaches(false);
            con.connect();
            LoggerUtil.logToFile(TAG, "con.getResponseCode is " + con.getResponseCode());
            LoggerUtil.logToFile(TAG, "con.getResponseMessage is " + con.getResponseMessage());
        } catch (Throwable t) {
            LoggerUtil.logToFile(TAG, "*** failed [" + t + "]");
        }

    In the log file I see:

        con.getResponseCode is 400
        con.getResponseMessage is Bad Request

    Read the article

  • Cloning a git repository from an svn repository results in a file-less, remote-branch-less git repo

    - by Tchalvak
    Working SVN repo

    I'm starting a git repo to interact with an svn repo. The svn repository is set up and working fine, with a single commit of a basic README file in it. Checking it out works fine:

        tchalvak:~/test/svn-test$ svn checkout --username=myUsernameHere http://www.url.to/project/here/charityweb/
        A    charityweb/README
        Checked out revision 1.

    Failed git-svn clone of svn repo

    When I try to clone the repository in git, the first step shows no errors...

        tchalvak:~/test$ git svn clone -s --username=myUserNameHere http://www.url.to/project/here/charityweb/
        Initialized empty Git repository in /home/tchalvak/test/charityweb/.git/
        Authentication realm: <http://www.url.to/project/here:80> Charity Web
        Password for 'myUserNameHere':

    ...but results in a useless folder:

        tchalvak:~/test$ ls
        charityweb
        tchalvak:~/test$ cd charityweb/
        tchalvak:~/test/charityweb$ ls
        tchalvak:~/test/charityweb$ ls -al
        total 12
        drwxr-xr-x 3 tchalvak tchalvak 4096 2010-04-02 13:46 .
        drwxr-xr-x 4 tchalvak tchalvak 4096 2010-04-02 13:46 ..
        drwxr-xr-x 8 tchalvak tchalvak 4096 2010-04-02 13:47 .git
        tchalvak:~/test/charityweb$ git branch -av
        tchalvak:~/test/charityweb$ git status
        # On branch master
        #
        # Initial commit
        #
        nothing to commit (create/copy files and use "git add" to track)
        tchalvak:~/test/charityweb$ git fetch
        fatal: Where do you want to fetch from today?
        tchalvak:~/test/charityweb$ git rebase origin/master
        fatal: bad revision 'HEAD'
        fatal: Needed a single revision
        invalid upstream origin/master
        tchalvak:~/test/charityweb$ git log
        fatal: bad default revision 'HEAD'

    How do I get something I can commit back to? I expect I'm doing something wrong in this process, but what?

    Read the article

  • Blueprint CSS and Separation of Presentation and Content When Designing Forms

    - by Merritt
    Is it possible to use Blueprint CSS and maintain a respectable level of separation between presentation and content? I like how easy the framework is to use when designing forms, but am worried that the manner in which I use the CSS classes for columnizing elements is a bad practice. For instance, say I have a 3-field form designed using Blueprint:

        <div class="container">
          <form action="" method="post" class="inline">
            <fieldset>
              <legend>Example</legend>
              <div class="span-3">
                <label for="a">Label A:</label>
                <input type="text" class="text" id="a" name="a" >
              </div>
              <div class="span-2">
                <label for="b">Label B:</label>
                <input type="text" class="text" id="b" name="b" >
              </div>
              <div class="span-3">
                <label for="o">Label O:</label>
                <input type="checkbox" id="o" name="o" value="true" checked="checked" class="checkbox">checkbox one
              </div>
              <div class="span-2 last">
                <input type="submit" value="submit" class="button">
              </div>
            </fieldset>
          </form>
        </div>

    Is using a class attribute with names like "span-2", "inline", and "last" a bad practice? Or am I missing the point?

    Read the article

  • Deploying software on compromised machines

    - by Martin
    I've been involved in a discussion about how to build internet voting software for a general election. We've reached a general consensus that there exist plenty of secure methods for two way authentication and communication. However, someone came along and pointed out that in a general election some of the machines being used are almost certainly going to be compromised. To quote: Let me be an evil electoral fraudster. I want to sample peoples votes as they vote and hope I get something scandalous. I hire a bot-net from some really shady dudes who control 1000 compromised machines in the UK just for election day. I capture the voting habits of 1000 voters on election day. I notice 5 of them have voted BNP. I look these users up and check out their machines, I look through their documents on their machine and find out their names and addresses. I find out one of them is the wife of a tory MP. I leak 'wife of tory mp is a fascist!' to some blogger I know. It hits the internet and goes viral, swings an election. That's a serious problem! So, what are the best techniques for running software where user interactions with the software must be kept secret, on a machine which is possibly compromised?

    Read the article

  • SQL Exception: "Impersonate Session Security Context" cannot be called in this batch because a simultaneous batch has called it

    - by kasey
    When opening a connection to SQL Server 2005 from our web app, we occasionally see this error: "Impersonate Session Security Context" cannot be called in this batch because a simultaneous batch has called it. We use MARS and connection pooling. The exception originates from the following piece of code:

        protected SqlConnection Open()
        {
            SqlConnection connection = new SqlConnection();
            connection.ConnectionString = m_ConnectionString;
            if (connection != null)
            {
                try
                {
                    connection.Open();
                    if (m_ExecuteAsUserName != null)
                    {
                        string sql = Format("EXECUTE AS LOGIN = {0};", m_ExecuteAsUserName);
                        ExecuteCommand(connection, sql);
                    }
                }
                catch (Exception exception)
                {
                    connection.Close();
                    connection = null;
                }
            }
            return connection;
        }

    I found an MS Connect article which suggests that the error is caused when a previous command has not yet terminated before the EXECUTE AS LOGIN command is sent. Yet how can this be if the connection has only just been opened? Could this be something to do with connection pooling interacting strangely with MARS?

    UPDATE: For the short-term we have implemented a workaround by clearing out the connection pool whenever this happens, to get rid of the bad connection, as it otherwise keeps getting handed back to various users. (Not too bad as this only happens a couple of times a day.) But if anyone has any further ideas, we are still looking out for a real solution...
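
    Not necessarily a cure for the MARS race described above, but one piece of hygiene that is often suggested is to keep EXECUTE AS and REVERT strictly paired on the same connection, so an impersonated security context is never returned to the pool. A hypothetical sketch of that pattern (class and method names are illustrative, not from the post):

        using System;
        using System.Data.SqlClient;

        // Hypothetical helper: scope EXECUTE AS ... REVERT so a pooled connection is
        // never handed back to the pool while still impersonating.
        static class ImpersonatedSql
        {
            public static void RunAsLogin(string connectionString, string loginName,
                                          Action<SqlConnection> work)
            {
                using (SqlConnection connection = new SqlConnection(connectionString))
                {
                    connection.Open();

                    using (SqlCommand impersonate = connection.CreateCommand())
                    {
                        // Same shape as the post's EXECUTE AS batch; quote/escape loginName appropriately.
                        impersonate.CommandText = string.Format("EXECUTE AS LOGIN = '{0}';", loginName);
                        impersonate.ExecuteNonQuery();
                    }

                    try
                    {
                        work(connection);
                    }
                    finally
                    {
                        using (SqlCommand revert = connection.CreateCommand())
                        {
                            revert.CommandText = "REVERT;"; // undo EXECUTE AS before Close() returns it to the pool
                            revert.ExecuteNonQuery();
                        }
                    }
                }
            }
        }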

    Read the article

  • Large number of simultaneous long-running operations in Qt

    - by Hostile Fork
    I have some long-running operations that number in the hundreds. At the moment they are each on their own thread. My main goal in using threads is not to speed these operations up. The more important thing in this case is that they appear to run simultaneously. I'm aware of cooperative multitasking and fibers. However, I'm trying to avoid anything that would require touching the code in the operations, e.g. peppering them with things like yieldToScheduler(). I also don't want to prescribe that these routines be stylized to emit queues of bite-sized task items... I want to treat them as black boxes. For the moment I can live with these downsides:

    • Maximum # of threads tends to be O(1000)
    • Cost per thread is O(1MB)

    To address the bad cache performance due to context switches, I did have the idea of a timer which would juggle the priorities such that only idealThreadCount() threads were ever at Normal priority, with all the rest set to Idle. This would let me widen the timeslices, which would mean fewer context switches and still be okay for my purposes.

    Question #1: Is that a good idea at all? One certain downside is it won't work on Linux (docs say no QThread::setPriority() there).

    Question #2: Any other ideas or approaches? Is QtConcurrent thinking about this scenario?

    (Some related reading: how-many-threads-does-it-take-to-make-them-a-bad-choice, many-threads-or-as-few-threads-as-possible, maximum-number-of-threads-per-process-in-linux)

    Read the article

  • FindBugs controversial description

    - by Tom Brito
    Am I understanding it wrong, or is the description wrong?

    Equals checks for noncompatible operand (EQ_CHECK_FOR_OPERAND_NOT_COMPATIBLE_WITH_THIS)

    This equals method is checking to see if the argument is some incompatible type (i.e., a class that is neither a supertype nor subtype of the class that defines the equals method). For example, the Foo class might have an equals method that looks like:

        public boolean equals(Object o) {
            if (o instanceof Foo)
                return name.equals(((Foo)o).name);
            else if (o instanceof String)
                return name.equals(o);
            else
                return false;
        }

    This is considered bad practice, as it makes it very hard to implement an equals method that is symmetric and transitive. Without those properties, very unexpected behaviors are possible.

    From: http://findbugs.sourceforge.net/bugDescriptions.html#EQ_CHECK_FOR_OPERAND_NOT_COMPATIBLE_WITH_THIS

    The description says that the Foo class might have an equals method like that, and then it says that "This is considered bad practice". I'm not getting the "right way"... How should the following method be written to be right?

        @Override
        public boolean equals(Object obj) {
            if (obj instanceof DefaultTableModel)
                return model.equals((DefaultTableModel)obj);
            else
                return false;
        }
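
    To make the symmetry problem concrete, here is a hypothetical illustration written in C# (the same equals-contract argument applies to Java): when Foo claims equality with a plain string, the string will never agree, so a.Equals(b) and b.Equals(a) give different answers:

        using System;

        class Foo
        {
            private readonly string name;
            public Foo(string name) { this.name = name; }

            // Mirrors the pattern FindBugs complains about: accepting an unrelated type.
            public override bool Equals(object o)
            {
                if (o is Foo other) return name.Equals(other.name);
                if (o is string s) return name.Equals(s);   // the problematic branch
                return false;
            }

            public override int GetHashCode() => name.GetHashCode();
        }

        class Program
        {
            static void Main()
            {
                Foo foo = new Foo("x");
                Console.WriteLine(foo.Equals("x"));   // True  - Foo says it equals the string
                Console.WriteLine("x".Equals(foo));   // False - string knows nothing about Foo
                // The equals contract requires both calls to agree, which is why the pattern is flagged.
            }
        }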

    Read the article

  • Environment variable names with parentheses, like %ProgramFiles(x86)%, in PowerShell?

    - by jwfearn
    How does one get the value of an environment variable whose name contains parentheses in a PowerShell script? To complicate matters, some variable names contain parentheses while others have similar names without parentheses. For example (using cmd.exe):

        C:\>set | find "ProgramFiles"
        CommonProgramFiles=C:\Program Files\Common Files
        CommonProgramFiles(x86)=C:\Program Files (x86)\Common Files
        ProgramFiles=C:\Program Files
        ProgramFiles(x86)=C:\Program Files (x86)

    We see that %ProgramFiles% is not the same as %ProgramFiles(x86)%. My PowerShell code is failing in a weird way because it's ignoring the part of the environment variable name after the parentheses. Since this happens to match the name of a different, but existing, environment variable, I don't fail - I just get the right value of the wrong variable. Here's a test function in the PowerShell scripting language to illustrate my problem:

        function Do-Test
        {
            $ok = "C:\Program Files (x86)"        # note space between 's' and '('
            $bad = "$Env:ProgramFiles" + "(x86)"  # uses %ProgramFiles%
            $bin32 = "$Env:ProgramFiles(x86)"     # LINE 6, I want to use %ProgramFiles(x86)%

            if ( $bin32 -eq $ok )
            {
                Write-Output "Pass"
            }
            elseif ( $bin32 -eq $bad )
            {
                Write-Output "Fail: %ProgramFiles% used instead of %ProgramFiles(x86)%"
            }
            else
            {
                Write-Output "Fail: some other reason"
            }
        }

    And here's the output:

        PS> Do-Test
        Fail: %ProgramFiles% used instead of %ProgramFiles(x86)%

    Is there a simple change I can make to line 6 above to get the correct value of %ProgramFiles(x86)%?

    NOTE: In the text of this post I am using batch file syntax for environment variables as a convenient shorthand. For example %SOME_VARIABLE% means "the value of the environment variable whose name is SOME_VARIABLE". If I knew the properly escaped syntax in PowerShell, I wouldn't need to ask this question.

    Read the article

  • C# - WinForms - Exception Handling for Events

    - by JustLooking
    Hi all, I apologize if this is a simple question (my Google-fu may be bad today). Imagine a WinForms application with this type of design: the main application shows one dialog, and that 1st dialog can show another dialog. Both of the dialogs have OK/Cancel buttons (data entry). I'm trying to figure out some type of global exception handling, along the lines of Application.ThreadException. What I mean is: each of the dialogs will have a few event handlers. The 2nd dialog may have:

        private void ComboBox_SelectedIndexChanged(object sender, EventArgs e)
        {
            try
            {
                AllSelectedIndexChangedCodeInThisFunction();
            }
            catch (Exception ex)
            {
                btnOK.Enabled = false; // Bad things, let's not let them save
                // log stuff, and other good things
            }
        }

    Really, all the event handlers in this dialog should be handled in this way. It's an exceptional case, so I just want to log all the pertinent information, show a message, and disable the OK button for that dialog. But I want to avoid a try/catch in each event handler (if I could). A drawback of all these try/catches is this:

        private void someFunction()
        {
            // If an exception occurs in SelectedIndexChanged,
            // it doesn't propagate to this function
            combobox.SelectedIndex = 3;
        }

    I don't believe that Application.ThreadException is a solution, because I don't want the exception to fall all the way back to the 1st dialog and then the main app. I don't want to close the app down; I just want to log it, display a message, and let them cancel out of the dialog. They can decide what to do from there (maybe go somewhere else in the app). Basically, I want a "global handler" in between the 1st dialog and the 2nd (and then, I suppose, another "global handler" in between the main app and the 1st dialog). Thanks.
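
    One pattern that keeps the per-dialog policy without repeating try/catch in every handler is a small guard method on a shared dialog base class. This is a hypothetical sketch (names like GuardedDialog and RunGuarded are not from the question), not a full substitute for Application.ThreadException:

        using System;
        using System.Windows.Forms;

        // Hypothetical base class for the data-entry dialogs.
        public class GuardedDialog : Form
        {
            protected Button btnOK = new Button();

            // Wrap each event handler body once, instead of repeating try/catch everywhere.
            protected void RunGuarded(Action body)
            {
                try
                {
                    body();
                }
                catch (Exception ex)
                {
                    // Log ex, show a message, and stop the user from saving bad state.
                    MessageBox.Show(this, ex.Message, "Unexpected error");
                    btnOK.Enabled = false;
                }
            }
        }

        // Usage in the 2nd dialog: handlers become one-liners delegating to the guard.
        public class SecondDialog : GuardedDialog
        {
            private void ComboBox_SelectedIndexChanged(object sender, EventArgs e)
            {
                RunGuarded(AllSelectedIndexChangedCodeInThisFunction);
            }

            private void AllSelectedIndexChangedCodeInThisFunction()
            {
                // ... actual handler logic ...
            }
        }

    The exception still will not propagate out of the handler to callers like someFunction(), but the policy (log, message, disable OK) lives in one place per dialog hierarchy rather than in every event handler.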

    Read the article
