Search Results

Search found 29753 results on 1191 pages for 'best practices'.

  • Given a 2d array sorted in increasing order from left to right and top to bottom, what is the best way to search it for a target number?

    - by Phukab
    I was recently given this interview question and I'm curious what a good solution to it would be. Say I'm given a 2d array where all the numbers in the array are in increasing order from left to right and top to bottom. What is the best way to search and determine if a target number is in the array? Now, my first inclination is to utilize a binary search since my data is sorted. I can determine if a number is in a single row in O(log N) time. However, it is the 2 directions that throw me off. Another solution I could use, if I could be sure the matrix is n x n, is to start at the middle. If the middle value is less than my target, then I can rule out the square portion of the matrix above and to the left of the middle. I then move diagonally and check again, reducing the size of the square that the target could potentially be in until I have honed in on the target number. Does anyone have any good ideas on solving this problem? Example array, sorted left to right, top to bottom:

        1 2 4  5  6
        2 3 5  7  8
        4 6 8  9 10
        5 8 9 10 11
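
    A compact way to exploit both sort orders is the "staircase" walk: start at the top-right corner, step left when the current value is too large and down when it is too small, so every comparison eliminates a whole row or column and the search costs O(rows + cols). A minimal sketch in Python, using the example matrix above:

        def staircase_search(matrix, target):
            # Start at the top-right corner: everything to the left is smaller,
            # everything below is larger, so each step discards a row or a column.
            if not matrix or not matrix[0]:
                return False
            row, col = 0, len(matrix[0]) - 1
            while row < len(matrix) and col >= 0:
                value = matrix[row][col]
                if value == target:
                    return True
                if value > target:
                    col -= 1   # target cannot be in this column
                else:
                    row += 1   # target cannot be in this row
            return False

        example = [[1, 2, 4, 5, 6],
                   [2, 3, 5, 7, 8],
                   [4, 6, 8, 9, 10],
                   [5, 8, 9, 10, 11]]
        print(staircase_search(example, 7))   # True
        print(staircase_search(example, 12))  # False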

  • What is the best way to create a running integer id on the AppEngine data storage?

    - by Freed
    For various reasons, I need a unique running integer id for my entities stored on the Google AppEngine. The automatically generated key sort of has this behaviour, but it doesn't start from 1 (or 0) and doesn't guarantee that the generated integer part will come from a continuous sequence. What would be the best way to efficiently implement this on AppEngine? Is there any support from the storage system? To add to the complexity, I might need to do this over entities from different entity groups, meaning I can't just get the highest id right now and save an entity with the next id in a transaction. Might memcache be the way to go...? Edit: I haven't yet implemented this, but to clarify on the memcache idea. I know memcache is unreliable, but in practice it probably won't lose data often enough to hurt performance. Basically, I would have a memcache entry for the last used id, update it (somehow atomically) whenever I create a new entity and use that id. In the case of memcache not having a value for this entry, I'd get the highest id so far by doing a query on my entities sorted by the id and update memcache (unless someone else had already done so). The only problem I can see with this right now would be atomicity of the operation as a whole if the save of my new entity was also part of a transaction. Thoughts...?
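
    A rough sketch of that memcache-backed counter, assuming the legacy google.appengine.api.memcache client and a hypothetical StoredThing model with an indexed integer property seq_id; memcache.incr() is atomic, and the datastore query only runs when the cache entry has been evicted. Note that this still leaves the transaction-atomicity concern at the end of the question open:

        from google.appengine.api import memcache
        from google.appengine.ext import db

        class StoredThing(db.Model):           # stand-in for the real entity kind
            seq_id = db.IntegerProperty()

        COUNTER_KEY = 'last-used-seq-id'       # hypothetical cache key

        def next_seq_id():
            # incr() is atomic, so concurrent callers get distinct values
            # for as long as the entry stays in memcache.
            value = memcache.incr(COUNTER_KEY)
            if value is None:
                # Entry evicted (or never set): rebuild it from the datastore.
                highest = StoredThing.all().order('-seq_id').get()
                start = highest.seq_id if highest else 0
                memcache.add(COUNTER_KEY, start)   # add() loses the race gracefully
                value = memcache.incr(COUNTER_KEY)
            return value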

  • What are the best websites/web applications for specific languages?

    - by JM4
    Browsing around Stack Overflow, I get overwhelmed with the number of "Why should I learn Python/Ruby/PHP/.Net/jQuery..." questions, and the list goes on. Most answers, although good, are usually battles from a language A programmer to a language B programmer detailing why one piece sucks more than another. People can discuss the theoretical benefits of one over another, but in the end money/glitz talks and the rest walks. I am more interested in the potential opportunity that can come from one language or another over the others. A little background: I am a project manager turned novice 'programmer' out of corporate necessity within the small company I currently work with, so I have relatively little set preference or experience; my interest is more out of curiosity. While I realize they are not all created equal or meant for the same things, I think it would be interesting to start a list of the best websites / web applications built on specific languages/frameworks, just to highlight the possibilities with each and give somebody like me motivation to say "How the heck was that done? Time to buy a book/take a class and learn." Tell me and I will forget, Show me and I will learn, Involve me and I will understand - Teton Lakota

  • What is the best way to browse the web safely? [closed]

    - by cedivad
    At the recent Pwn2Own we saw every single browser, from IE to Chrome, miserably hacked. That scares me. How should we browse the Internet safely while continuing to enjoy it? (Using lynx is not an option.) Virtual machines? Different users with non-administrative privileges? Keeping work and "Facebook" on 2 separate machines? (Or on 2 hard disks, invisible to each other?) I think someone should write a book on the matter.

  • Is this the best way to grab common elements from a Hash of arrays?

    - by Hulihan Applications
    I'm trying to get a common element from a group of arrays in Ruby. Normally, you can use the & operator to compare two arrays, which returns elements that are present or common in both arrays. This is all good, except when you're trying to get common elements from more than two arrays. However, I want to get common elements from an unknown, dynamic number of arrays, which are stored in a hash. I had to resort to using the eval() method in Ruby, which executes a string as actual code. Here's the function I wrote:

        def get_common_elements_for_hash_of_arrays(hash)
          # Get an array of common elements contained in a hash of arrays, for every array in the hash.
          #   ["1","2","3"] & ["2","4","5"] & ["2","5","6"]                          # => ["2"]
          #   eval("[\"1\",\"2\",\"3\"] & [\"2\",\"4\",\"5\"] & [\"2\",\"5\",\"6\"]") # => ["2"]
          eval_string_array = Array.new # strings of arrays, e.g. "[\"2\",\"5\",\"6\"]", which we will join with & to get all common elements
          hash.each do |key, array|
            eval_string_array << array.inspect
          end
          eval_string = eval_string_array.join(" & ") # create eval string delimited with & so we can get common values
          return eval(eval_string)
        end

        example_hash = {:item_0 => ["1","2","3"], :item_1 => ["2","4","5"], :item_2 => ["2","5","6"]}
        puts get_common_elements_for_hash_of_arrays(example_hash) # => 2

    This works and is great, but I'm wondering... eval, really? Is this the best way to do it? Are there even any other ways to accomplish this (besides a recursive function, of course)? If anyone has any suggestions, I'm all ears. Otherwise, feel free to use this code if you need to grab a common item or element from a group or hash of arrays; this code can also easily be adapted to search an array of arrays.
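
    For what it's worth, the eval step can be avoided by folding the intersection operation across the arrays themselves (in Ruby, something along the lines of reducing hash.values with &). The same idea sketched in Python, using set intersection:

        def common_elements(groups):
            # Fold set intersection over the dict's values; this replaces the
            # eval()-built "a & b & c" expression from the question.
            values = list(groups.values())
            if not values:
                return []
            result = set(values[0])
            for other in values[1:]:
                result &= set(other)
            return sorted(result)

        example = {'item_0': ["1", "2", "3"],
                   'item_1': ["2", "4", "5"],
                   'item_2': ["2", "5", "6"]}
        print(common_elements(example))  # ['2']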

  • Best way to sync a file between 2 or more drives?

    - by jasondavis
    I have a special file that I edit daily. It is somewhat like a large text file, but with a little more to it than that. I have a copy on my main desktop and a copy of the file on a USB drive as well. I would like a way to open up either file (from the USB drive or from my desktop drive) and be able to edit and save the file and have it stay updated on both drives. What is a lightweight and easy method of doing this? I do not need anything fancy.
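
    A dedicated folder-sync tool is the more robust answer, but the lightweight version of the idea is simply "copy whichever side was modified more recently over the other". A sketch in Python, with hypothetical paths standing in for the two copies:

        import shutil
        from pathlib import Path

        DESKTOP_COPY = Path(r"C:\data\notes.dat")   # hypothetical desktop path
        USB_COPY = Path(r"E:\notes.dat")            # hypothetical USB path

        def sync_newer(a: Path, b: Path) -> None:
            # Copy whichever file was modified more recently over the other;
            # copy2 preserves the modification time so repeated runs are stable.
            if a.stat().st_mtime > b.stat().st_mtime:
                shutil.copy2(a, b)
            elif b.stat().st_mtime > a.stat().st_mtime:
                shutil.copy2(b, a)

        sync_newer(DESKTOP_COPY, USB_COPY)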

  • What is the best way/tool to analyze raw data (network stats) from a simulation?

    - by user90500
    After running a simulation of a network (using the QualNet simulator), I end up with IP stats stored in a database; I then extract the data to a CSV file. So now I have 750 MB of raw network stats (time stamp, packet id, source ip, source port, protocol, etc). What are the common ways of analyzing large amounts of data like this, if you want to know things like packet loss, throughput, delay, congestion, etc.?
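
    For a one-off 750 MB CSV, a single streaming pass with the standard csv module (or pandas, if it all fits in memory) is usually enough to build the per-flow counters from which loss, throughput and delay are derived. A rough sketch, with hypothetical column names standing in for whatever the simulator actually exports:

        import csv
        from collections import defaultdict

        sent = defaultdict(int)       # packets sent per (source, destination) flow
        received = defaultdict(int)   # packets received per flow
        bytes_rx = defaultdict(int)   # payload bytes received per flow

        with open('stats.csv', newline='') as f:
            for row in csv.DictReader(f):
                flow = (row['source_ip'], row['dest_ip'])
                if row['event'] == 'send':
                    sent[flow] += 1
                elif row['event'] == 'receive':
                    received[flow] += 1
                    bytes_rx[flow] += int(row['size'])

        for flow, n_sent in sent.items():
            loss = 1 - received[flow] / n_sent if n_sent else 0.0
            print(flow, 'loss: %.2f%%' % (100 * loss), 'bytes received:', bytes_rx[flow])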

  • Resolve Instructional Webcast Series

    - by Get Proactive Customer Adoption Team
    Catch the Express—Register for an Instructional Webcast

    Oracle Proactive Support's 'Get Proactive' message to customers underscores the benefits they'll obtain by leveraging the Prevent, Resolve and Upgrade capabilities available across the suite of Oracle Products. Our goal in Proactive Support is to show customers how to 'Get Proactive' and achieve success by leveraging the latest tools, knowledge, and best practices available to manage your applications and technology more proactively. Most importantly, we want to ensure that customers are proficient in the use of these proactive capabilities. To help you gain this proficiency, we've recently launched a series of instructional webcasts that we call the "Resolve Series." This series consists of both live and on-demand webcasts, and features some of the key proactive capabilities that customers can leverage to resolve their own problems. We launched the first phase of the series in July, and focused on finding answers using the My Oracle Support portal. Among the topics covered in those sessions were best practices for searching the knowledge base, leveraging communities to find answers faster, and other proactive features of My Oracle Support. The second phase of the series is set to kick off in September. This phase will include product-specific sessions designed to provide customers who use the product with the skills and knowledge required to leverage some of the most important capabilities found under the "RESOLVE" category of our proactive portfolio on My Oracle Support. These webcasts will feature Subject Matter Experts demonstrating how to use the tools and capabilities, discussing best practices, and providing answers to any questions you might have. In addition, hands-on labs will be included in some of the sessions, allowing you to practice applying what you've just learned. Whether you are a new customer or you've worked with Oracle Support for years, you'll discover new information and techniques to help you work more efficiently and keep your systems running smoothly. Leverage this opportunity to learn best practices and get the inside track on finding answers fast by using the right tools at the right time. Make sure to take advantage of these webcasts and maximize the value you receive from your Oracle Premier Support investment. See the full schedule of events and register for sessions.

  • Page appears indexed in Google but not findable for any search terms?

    - by Jeff Atwood
    (Note that I am going to use screenshots here because I suspect writing about this will change the behavior over time.) If you do a Google search for uiviewcontroller best practices, either with or without the quotes, you end up with results like this: Note that none of these pages resolve to the actual Stack Overflow question containing those words in the title. They resolve to either a) sites that are mirroring our Creative Commons data and correctly pointing back to the source question without nofollow, as properly specified by our attribution requirements, or b) our own internal links to the question, but not the actual question itself. The actual page with the title "Custom UIView and UIViewController best practices?" does exist at this URL, http://stackoverflow.com/questions/3300183/custom-uiview-and-uiviewcontroller-best-practices, and apparently it is present in Google's index! But why does it not appear when we search for uiviewcontroller best practices?

    - We know that Google contains this page in its index
    - Our search terms match the title of the question
    - Stack Overflow has much higher PageRank than the other sites that are mirroring this question under Creative Commons

    I don't get it. What are we doing wrong here?

  • What Is The Best Database For Delphi Desktop Applications That Supports Stored Procedures?

    - by Cape Cod Gunny
    I started with Turbo Pascal 3, went to TP5, Bought TP6 called Borland the next day and downgraded to TP5.5. Bought Delphi 3, and now have Delphi 5 Enterprise. I sort of lost interest in writing code about 4-5 years ago for two reasons; Spent all day writing ASP & SQL for someone else. PC Techniques magazine went away. I've got a few programs in the shareware market that are solid performers but are in need of serious updating. I love Delphi or did when it was Borland (before Borland bought DBase and all the other crap), I'd like to salvage as much of my D5E code as possible but I doubt I can. I plan on upgrading to Delphi 2010. My next software release needs to interact with a database. I'm very proficient with MS Sql and like to put all of the database code in stored procedures. What is the best database choice that interacts well with Delphi, allows stored procedures and is so easy to deploy that even the Geico gecko could deploy it? 10/25/2009 18:53 PM EST Re-Opened After Reading Install Docs for Delphi 2010 I downloaded a trial version of Delphi 2010 and unzipped the install. I've been reading the install docs included in the package. I started with the install.htm inside the zip package. install.htm wisely tells you to see the following two articles: Installation Notes: http://edn.embarcadero.com/article/39754 Release Notes: http://edn.embarcadero.com/article/39758 the release notes state the following... MSSQL driver requires the installation of the SQL Native Client. SQL Native Client 2008 is required for dbxmss.dll. SQL Native Client 2005 is required for dbxmss9.dll I checked my machine to see if SQL Native Client is installed. Nope. I wasn't done reading the docs so I made a note to install SQL Native Client. I googled dbxmss.dll and dbxmss9.dll and found a very interesting thread on the Embarcadero forums. read thread here. After reading this thread and some careful thought I don't think I will be using Microsoft SQL Express. I can't rely on my customers having the right drivers installed. So, I'm back to looking for a different solution. If I'm selling a $40 product to the general masses I need to have a bulletproof solution that doesn't require my brand new customer to update their machine before my software will work.

  • Which HTTP redirect status code is best for this REST API scenario?

    - by Aseem Kishore
    I'm working on a REST API. The key objects ("nouns") are "items", and each item has a unique ID. E.g. to get info on the item with ID foo: GET http://api.example.com/v1/item/foo New items can be created, but the client doesn't get to pick the ID. Instead, the client sends some info that represents that item. So to create a new item: POST http://api.example.com/v1/item/ hello=world&hokey=pokey With that command, the server checks if we already have an item for the info hello=world&hokey=pokey. So there are two cases here. Case 1: the item doesn't exist; it's created. This case is easy. 201 Created Location: http://api.example.com/v1/item/bar Case 2: the item already exists. Here's where I'm struggling... not sure what's the best redirect code to use. 301 Moved Permanently? 302 Found? 303 See Other? 307 Temporary Redirect? Location: http://api.example.com/v1/item/foo I've studied the Wikipedia descriptions and RFC 2616, and none of these seem to be perfect. Here are the specific characteristics I'm looking for in this case: The redirect is permanent, as the ID will never change. So for efficiency, the client can and should make all future requests to the ID endpoint directly. This suggests 301, as the other three are meant to be temporary. The redirect should use GET, even though this request is POST. This suggests 303, as all others are technically supposed to re-use the POST method. In practice, browsers will use GET for 301 and 302, but this is a REST API, not a website meant to be used by regular users in browsers. It should be broadly usable and easy to play with. Specifically, 303 is HTTP/1.1 whereas 301 and 302 are HTTP/1.0. I'm not sure how much of an issue this is. At this point, I'm leaning towards 303 just to be semantically correct (use GET, don't re-POST) and just suck it up on the "temporary" part. But I'm not sure if 302 would be better since in practice it's been the same behavior as 303, but without requiring HTTP/1.1. But if I go down that line, I wonder if 301 is even better for the same reason plus the "permanent" part. Thoughts appreciated!
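
    To make the two branches concrete, here is a small sketch using Python's standard http.server with a toy in-memory store (a real service would obviously persist items elsewhere); the existing-item branch answers 303 with a Location header so the client follows up with a GET against the canonical URL:

        from http.server import BaseHTTPRequestHandler, HTTPServer

        ITEMS = {}  # info payload -> item id (toy in-memory store)

        def find_or_create(info):
            # Return (item_id, created_now) for the given info payload.
            if info in ITEMS:
                return ITEMS[info], False
            ITEMS[info] = 'item%d' % (len(ITEMS) + 1)
            return ITEMS[info], True

        class ItemHandler(BaseHTTPRequestHandler):
            def do_POST(self):
                length = int(self.headers.get('Content-Length', 0))
                info = self.rfile.read(length).decode()
                item_id, created = find_or_create(info)
                # 201 for a brand-new item, 303 to send the client off to GET
                # the canonical URL of an item that already exists.
                self.send_response(201 if created else 303)
                self.send_header('Location', '/v1/item/' + item_id)
                self.end_headers()

        if __name__ == '__main__':
            HTTPServer(('localhost', 8080), ItemHandler).serve_forever()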

  • Best practice: How to use (repeat) CSS style attributes correctly?

    - by ellie
    Hi guys! As a CSS newbie I'm wondering whether it's recommended by professionals to repeat specific style attributes, with their default (not inherited) values, for every relevant selector. For example, should I rather use

        body {background:transparent none no-repeat; border:0 none transparent; margin:0; padding:0;}
        img {background:transparent none no-repeat; border:0 none transparent; margin:0; outline:transparent none 0; padding:0;}
        div#someID {background:transparent none no-repeat; border:0 none; margin:0 auto; padding:0; text-align:left; width:720px; ...}

    or

        body {background:transparent; border:0; margin:0; padding:0;}
        img {background:transparent; border:0; margin:0; outline:0; padding:0;}
        div#someID {background:transparent; border:0; margin:0 auto; padding:0; text-align:left; width:720px; ...}

    or just what (I think) I really need:

        body {background:transparent; margin:0; padding:0;}
        img {border:0; outline:0;}
        div#someID {margin:0 auto; width:720px; ...}

    If it's best practice to go with the first or second one, what do you think about defining a class like

        .foo {background:transparent; border:0; margin:0; padding:0;}

    and then applying it to every relevant selector:

        <div id="someID" class="foo">...</div>

    Yep, now I'm totally confused... so please advise! Thanks!

  • What's the best way to do base36 arithmetic in perl?

    - by DVK
    What's the best way to do base36 arithmetic in Perl? To be more specific, I need to be able to do the following:

    - Operate on positive N-digit numbers in base 36 (e.g. digits are 0-9 A-Z), where N is finite, say 9
    - Provide basic arithmetic, at the very least the following 3: addition (A+B), subtraction (A-B), and whole division, e.g. floor(A/B)

    Strictly speaking, I don't really need a base10 conversion ability - the numbers will 100% of the time be in base36. So I'm quite OK if the solution does NOT implement conversion from base36 back to base10 and vice versa. I don't much care whether the solution is brute-force "convert to base 10 and back" or converting to binary, or some more elegant approach "natively" performing baseN operations (as stated above, to/from base10 conversion is not a requirement). My only 3 considerations are:

    - It fits the minimum specifications above.
    - It's "standard". Currently we're using an old homegrown module based on base10 conversion done by hand that is buggy and sucks. I'd much rather replace that with some commonly used CPAN solution instead of re-writing my own bicycle from scratch, but I'm perfectly capable of building it if no better standard possibility exists.
    - It must be fast-ish (though not lightning fast). Something that takes 1 second to sum up 2 9-digit base36 numbers is worse than anything I can roll on my own :)

    P.S. Just to provide some context in case people decide to solve my XY problem for me in addition to answering the technical question above :) We have a fairly large tree (stored in the DB as a bunch of edges), and we need to superimpose order on a subset of that tree. The tree dimensions are big both depth- and breadth-wise. The tree is VERY actively updated (inserts and deletes and branch moves). This is currently done by having a second table with 3 columns: parent_vertex, child_vertex, local_order, where local_order is a 9-character string built of A-Z0-9 (e.g. a base 36 number). Additional considerations: it is required that the local order is unique per child (and obviously unique per parent). Any complete re-ordering of a parent is somewhat expensive, and thus the implementation is to try and assign - for a parent with X children - orders which are somewhat evenly distributed between 0 and 36**9-1, so that almost no tree inserts result in a full re-ordering.
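
    The question is really after a CPAN module, but as a sketch of how cheap the "convert and convert back" route is for 9-digit values: Python's int() already parses base 36, so only the reverse direction needs hand-rolling, and the three required operations fall out directly (a CPAN base-36 or big-integer module would play the same role in Perl):

        DIGITS = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ'

        def to_base36(n, width=9):
            # Render a non-negative integer as a fixed-width base-36 string.
            out = []
            while n:
                n, r = divmod(n, 36)
                out.append(DIGITS[r])
            return ''.join(reversed(out)).rjust(width, '0')

        def b36_add(a, b): return to_base36(int(a, 36) + int(b, 36))
        def b36_sub(a, b): return to_base36(int(a, 36) - int(b, 36))
        def b36_div(a, b): return to_base36(int(a, 36) // int(b, 36))

        print(b36_add('00000000Z', '000000001'))  # 000000010
        print(b36_div('00000000Z', '000000002'))  # 00000000H  (35 // 2 == 17, i.e. 'H')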

  • What is the best way to archive data in a relational database?

    - by GenericTypeTea
    I have a bit of an issue with a particular aspect of a program I'm working on. I need the ability to archive (fix) a table so that a change anywhere in the system will not affect the results it returns. This is the basic structure of what I need to fix:

        Recipe --> Recipe (as sub recipe)
        Recipe --> Ingredients

    So, if I fix a Recipe, I need to ensure all the sub recipes (including all the sub recipes' sub recipes and so forth) are fixed and all its ingredients are fixed. The problem is that the sub recipes and ingredients still need to be modifiable, as they are used by other recipes that are not fixed. I came up with a solution whereby I serialize (with protobuf-net) a master object that deals with the recipe and all the sub recipes and ingredients, and save the archive data to a table like the following:

        Archive {
            ReferenceId,      (i.e. RecipeId)
            ReferenceTypeId,  (i.e. Recipe)
            ArchiveData varbinary(max)
        }

    Now, this works great and is almost perfect... however I totally forgot (I'd love to blame the agile development mentality, however this was just short-sighted) that this information needs to be reported on. As far as I'm aware I can't think how I could inflate the serialized data back into my Recipe object and use it in a report. I'm using the standard SQL 2005 report services at the moment. Alternatively, I guess I could do the following:

    1) Duplicate every table and tag the word "Archive" on the end of the table name. This would then give me an area of specific archive data... but ignoring my simplified example, there'd actually be about 15 tables duplicated.
    2) Add a nullable, non-foreign key property called "CopiedFromId" to every table that contains fixed data and duplicate every record that the recipe (and all its sub recipes and all their sub recipes) touches.
    3) Create some sort of denormalised structure that could be restored from at a later date to the original, unfixed recipe. Although I think this would be like option 1 and involve a lot of extra tables.

    Anyway, I'm at a total loss and do not like any of the ideas particularly. Can anyone please advise the best course of action?

    EDIT: Or 4) Create tables specific to what the report requires and populate them with the data when the user clicks the report button? This would cause about 4 extra tables for the report in question.

  • How best to store Subversion version information in EAR's?

    - by Rene
    When receiving a bug report or an it-doesn't-work message, one of my initial questions is always: what version? With different builds being at many stages of testing, planning and deploying, this is often a non-trivial question. In the case of releasing Java JAR (ear, jar, rar, war) files, I would like to be able to look in/at the JAR and switch to the same branch, version or tag that was the source of the released JAR. How can I best adjust the ant build process so that the version information in the svn checkout remains in the created build? I was thinking along the lines of:

    - adding a VERSION file, but with what content?
    - storing information in the META-INF file, but under what property with which content?
    - copying sources into the result archive
    - adding svn:properties to all sources with keywords in places the compiler leaves them be

    I ended up using the svnversion approach (the accepted answer), because it scans the entire subtree as opposed to svn info, which just looks at the current file / directory. For this I defined the SVN task in the ant file to make it more portable.

        <taskdef name="svn" classname="org.tigris.subversion.svnant.SvnTask">
            <classpath>
                <pathelement location="${dir.lib}/ant/svnant.jar"/>
                <pathelement location="${dir.lib}/ant/svnClientAdapter.jar"/>
                <pathelement location="${dir.lib}/ant/svnkit.jar"/>
                <pathelement location="${dir.lib}/ant/svnjavahl.jar"/>
            </classpath>
        </taskdef>

    Not all builds result in webservices. The ear file before deployment must keep the same name because of updating in the application server. Making the file executable is still an option, but until then I just include a version information file.

        <target name="version">
            <svn><wcVersion path="${dir.source}"/></svn>
            <echo file="${dir.build}/VERSION">${revision.range}</echo>
        </target>

    Refs:
    - svnrevision: http://svnbook.red-bean.com/en/1.1/re57.html
    - svn info: http://svnbook.red-bean.com/en/1.1/re13.html
    - subclipse svn task: http://subclipse.tigris.org/svnant/svn.html
    - svn client: http://svnkit.com/

  • What is the best way to read and write cXML documents in C# ?

    - by tetranz
    I know this is a vague open-ended question. I'm hoping to get some general direction. I need to add cXML punchout to an ASP.NET C# site / application. This is replacing something that I wrote years ago in ColdFusion. I'm a reasonably experienced C# developer but I haven't done much with XML. There seem to be lots of different options for processing XML in .NET. Here's the open-ended question: Assuming that I have an XML document in some form, e.g. a file or a string, what is the best way to read it into my code? I want to get the data and then query databases etc. The cXML document size and our traffic volumes are easily small enough that loading a cXML document into memory is not a problem. Should I:

    1) Manually build classes based on the dtd and use the XML Serializer?
    2) Use a tool to generate classes. There are sample cXML files downloadable from Ariba.com. I tried xsd.exe to generate an xsd and then xsd.exe /c to generate classes. When I try to deserialize I get errors because there seems to be "confusion" around whether some elements should be single values or arrays. I tried the CodeXS online tool but that gives errors in its log and errors if I try to deserialize a sample document.
    3) Create a dataset and ReadXml()?
    4) Create a typed dataset and ReadXml()?
    5) Use Linq to XML. I often use Linq to Objects so I'm familiar with Linq in general, but I'm struggling to see what it gives me in this situation.
    6) Some other means.

    I guess I need to improve my understanding of XML in general but even so... am I missing some obvious way of doing this? In the old ColdFusion site I found a free component ("tag") which basically ignored any schema and read the XML into a "structure", which is essentially a series of nested hash tables, and which was then easy to read in code. That was probably quite sloppy but it worked. I also need to generate XML files from my C# objects. Maybe Linq to XML will be good for that. I could start with a default "template" document and manipulate it before saving. Thanks for any pointers ...

  • The best way to separate admin functionality from a public site?

    - by AndrewO
    I'm working on a site that's grown both in terms of user base and functionality to the point where it's becoming evident that some of the admin tasks should be separate from the public website. I was wondering what the best way to do this would be. For example, the site has a large social component to it, and a public sales interface. But at the same time, there are back office tasks, bulk upload processing, dashboards (with long running queries), and customer relations tools in the admin section that I would like not to be affected by spikes in public traffic (or to affect the public-facing response time). The site is running on a fairly standard Rails/MySQL/Linux stack, but I think this is more of an architecture problem than an implementation one: mainly, how does one keep the data and business logic in sync between these different applications? Some strategies that I'm evaluating:

    1) Create a slave database of the public facing database on another machine. Extract out all of the model and library code so that it can be shared between the applications. Create new controllers and views for the admin interfaces. I have limited experience with replication and am not even sure that it's supposed to be used this way (most of the time I've seen it, it's been for scaling out the read capabilities of the same application, rather than having multiple different ones). I'm also worried about the potential for latency issues if the slave is not on the same network.
    2) Create new, more task/department-specific applications and use a message oriented middleware to integrate them. I read Enterprise Integration Patterns awhile back and they seemed to advocate this for distributed systems. (Alternatively, in some cases the basic Rails-style RESTful API functionality might suffice.) But I have nightmares about data synchronization issues and the massive re-architecting that this would entail.
    3) Some mixture of the two. For example, the only public information necessary for some of the back office tasks is a read-only completion time or status. Would it make sense to have that on a completely separate system and send the data to public? Meanwhile, the user/group admin functionality would be run on a separate system sharing the database? The downside is, this seems to keep many of the concerns I have with the first two, especially the re-architecting.

    I'm sure the answers are going to be highly dependent on a site's specific needs, but I'd love to hear success (or failure) stories.

  • Best way to test for a variable's existence in PHP; isset() is clearly broken

    - by chazomaticus
    From the isset() docs: isset() will return FALSE if testing a variable that has been set to NULL. Basically, isset() doesn't check whether the variable is set at all, but whether it's set to anything but NULL. Given that, what's the best way to actually check for the existence of a variable? I tried something like:

        if(isset($v) || @is_null($v))

    (the @ is necessary to avoid the warning when $v is not set) but is_null() has a similar problem to isset(): it returns TRUE on unset variables! It also appears that:

        @($v === NULL)

    works exactly like @is_null($v), so that's out, too. How are we supposed to reliably check for the existence of a variable in PHP? Edit: there is clearly a difference in PHP between variables that are not set, and variables that are set to NULL:

        <?php
        $a = array('b' => NULL);
        var_dump($a);

    PHP shows that $a['b'] exists, and has a NULL value. If you add:

        var_dump(isset($a['b']));
        var_dump(isset($a['c']));

    you can see the ambiguity I'm talking about with the isset() function. Here's the output of all three of these var_dump()s:

        array(1) {
          ["b"]=>
          NULL
        }
        bool(false)
        bool(false)

    Further edit: two things. One, a use case. An array being turned into the data of an SQL UPDATE statement, where the array's keys are the table's columns, and the array's values are the values to be applied to each column. Any of the table's columns can hold a NULL value, signified by passing a NULL value in the array. You need a way to differentiate between an array key not existing, and an array's value being set to NULL; that's the difference between not updating the column's value and updating the column's value to NULL. Second, Zoredache's answer, array_key_exists(), works correctly, for my above use case and for any global variables:

        <?php
        $a = NULL;
        var_dump(array_key_exists('a', $GLOBALS));
        var_dump(array_key_exists('b', $GLOBALS));

    outputs:

        bool(true)
        bool(false)

    Since that properly handles just about everywhere I can see there being any ambiguity between variables that don't exist and variables that are set to NULL, I'm calling array_key_exists() the official easiest way in PHP to truly check for the existence of a variable. (The only other case I can think of is class properties, for which there's property_exists(), which, according to its docs, works similarly to array_key_exists() in that it properly distinguishes between not being set and being set to NULL.)

  • Best way to run remote VBScript in ASP.net? WMI or PsExec?

    - by envinyater
    I am doing some research to find out the best and most efficient method for this. I will need to execute remote scripts on a number of Window Servers/Computers (while we are building them). I have a web application that is going to automate this task, I currently have my prototype working to use PsExec to execute remote scripts. This requires PsExec to be installed on the system. A colleague suggested I should use WMI for this. I did some research in WMI and I couldn't find what I'm looking for. I want to either upload the script to the server and execute it and read the results, or already have the script on the server and execute it and read the results. I would prefer the first option though! Which is more ideal, PsExec or WMI? For reference, this is my prototype PsExec code. This script is only executing a small script to get the Windows OS and Service Pack Info. Protected Sub windowsScript(ByVal COMPUTERNAME As String) ' Create an array to store VBScript results Dim winVariables(2) As String nameLabel.Text = Name.Text ' Use PsExec to execute remote scripts Dim Proc As New System.Diagnostics.Process ' Run PsExec locally Proc.StartInfo = New ProcessStartInfo("C:\Windows\psexec.exe") ' Pass in following arguments to PsExec Proc.StartInfo.Arguments = COMPUTERNAME & " -s cmd /C c:\systemInfo.vbs" Proc.StartInfo.RedirectStandardInput = True Proc.StartInfo.RedirectStandardOutput = True Proc.StartInfo.UseShellExecute = False Proc.Start() ' Pause for script to run System.Threading.Thread.Sleep(1500) Proc.Close() System.Threading.Thread.Sleep(2500) 'Allows the system a chance to finish with the process. Dim filePath As String = COMPUTERNAME & "\TTO\somefile.txt" 'Download file created by script on Remote system to local system My.Computer.Network.DownloadFile(filePath, "C:\somefile.txt") System.Threading.Thread.Sleep(1000) ' Pause so file gets downloaded ''Import data from text file into variables textRead("C:\somefile.txt", winVariables) WindowsOSLbl.Text = winVariables(0).ToString() SvcPckLbl.Text = winVariables(1).ToString() System.Threading.Thread.Sleep(1000) ' ''Delete the file on server - we don't need it anymore Dim Proc2 As New System.Diagnostics.Process Proc2.StartInfo = New ProcessStartInfo("C:\Windows\psexec.exe") Proc2.StartInfo.Arguments = COMPUTERNAME & " -s cmd /C del c:\somefile.txt" Proc2.StartInfo.RedirectStandardInput = True Proc2.StartInfo.RedirectStandardOutput = True Proc2.StartInfo.UseShellExecute = False Proc2.Start() System.Threading.Thread.Sleep(500) Proc2.Close() ' Delete file locally File.Delete("C:\somefile.txt") End Sub

  • What is the best way to return two values from a method?

    - by Edward Tanguay
    When I have to write methods which return two values, I usually go about it as in the following code which returns a List<string>. Or if I have to return e.g. a id and string, then I return a List<object> and then pick them out with index number and recast the values. This recasting and referencing by index seems inelegant so I want to develop a new habit for methods that return two values. What is the best pattern for this? using System; using System.Collections.Generic; using System.Linq; namespace MultipleReturns { class Program { static void Main(string[] args) { string extension = "txt"; { List<string> entries = GetIdCodeAndFileName("first.txt", extension); Console.WriteLine("{0}, {1}", entries[0], entries[1]); } { List<string> entries = GetIdCodeAndFileName("first", extension); Console.WriteLine("{0}, {1}", entries[0], entries[1]); } Console.ReadLine(); } /// <summary> /// gets "first.txt", "txt" and returns "first", "first.txt" /// gets "first", "txt" and returns "first", "first.txt" /// it is assumed that extensions will always match /// </summary> /// <param name="line"></param> public static List<string> GetIdCodeAndFileName(string line, string extension) { if (line.Contains(".")) { List<string> parts = line.BreakIntoParts("."); List<string> returnItems = new List<string>(); returnItems.Add(parts[0]); returnItems.Add(line); return returnItems; } else { List<string> returnItems = new List<string>(); returnItems.Add(line); returnItems.Add(line + "." + extension); return returnItems; } } } public static class StringHelpers { public static List<string> BreakIntoParts(this string line, string separator) { if (String.IsNullOrEmpty(line)) return null; else { return line.Split(new string[] { separator }, StringSplitOptions.None).Select(p => p.Trim()).ToList(); } } } }
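
    For comparison, languages with first-class tuples make the "two typed values out" shape direct, which is essentially what the usual C# options (out parameters, a small dedicated class, or Tuple<T1,T2> on newer framework versions) give you without the index-and-recast step. A sketch of the same helper in Python, returning a tuple:

        def get_id_code_and_file_name(line, extension):
            # Returns (id_code, file_name) as a tuple instead of a positional list.
            if '.' in line:
                return line.split('.', 1)[0], line
            return line, '%s.%s' % (line, extension)

        print(get_id_code_and_file_name('first.txt', 'txt'))  # ('first', 'first.txt')
        print(get_id_code_and_file_name('first', 'txt'))      # ('first', 'first.txt')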

  • Which is the "best" data access framework/approach for C# and .NET?

    - by Frans
    (EDIT: I made it a community wiki as it is more suited to a collaborative format.) There are a plethora of ways to access SQL Server and other databases from .NET. All have their pros and cons and it will never be a simple question of which is "best" - the answer will always be "it depends". However, I am looking for a comparison at a high level of the different approaches and frameworks in the context of different levels of systems. For example, I would imagine that for a quick-and-dirty Web 2.0 application the answer would be very different from an in-house Enterprise-level CRUD application. I am aware that there are numerous questions on Stack Overflow dealing with subsets of this question, but I think it would be useful to try to build a summary comparison. I will endeavour to update the question with corrections and clarifications as we go. So far, this is my understanding at a high level - but I am sure it is wrong... I am primarily focusing on the Microsoft approaches to keep this focused. ADO.NET Entity Framework Database agnostic Good because it allows swapping backends in and out Bad because it can hit performance and database vendors are not too happy about it Seems to be MS's preferred route for the future Complicated to learn (though, see 267357) It is accessed through LINQ to Entities so provides ORM, thus allowing abstraction in your code LINQ to SQL Uncertain future (see Is LINQ to SQL truly dead?) Easy to learn (?) Only works with MS SQL Server See also Pros and cons of LINQ "Standard" ADO.NET No ORM No abstraction so you are back to "roll your own" and play with dynamically generated SQL Direct access, allows potentially better performance This ties in to the age-old debate of whether to focus on objects or relational data, to which the answer of course is "it depends on where the bulk of the work is" and since that is an unanswerable question hopefully we don't have to go in to that too much. IMHO, if your application is primarily manipulating large amounts of data, it does not make sense to abstract it too much into objects in the front-end code, you are better off using stored procedures and dynamic SQL to do as much of the work as possible on the back-end. Whereas, if you primarily have user interaction which causes database interaction at the level of tens or hundreds of rows then ORM makes complete sense. So, I guess my argument for good old-fashioned ADO.NET would be in the case where you manipulate and modify large datasets, in which case you will benefit from the direct access to the backend. Another case, of course, is where you have to access a legacy database that is already guarded by stored procedures. ASP.NET Data Source Controls Are these something altogether different or just a layer over standard ADO.NET? - Would you really use these if you had a DAL or if you implemented LINQ or Entities? NHibernate Seems to be a very powerful and powerful ORM? Open source Some other relevant links; NHibernate or LINQ to SQL Entity Framework vs LINQ to SQL

  • What's the best way to handle modules that use each other?

    - by Axeman
    What's the best way to handle modules that use each other? Let's say I have a module which has functions for hashes: # Really::Useful::Functions::On::Hash.pm use base qw<Exporter>; use strict; use warnings; use Really::Useful::Functions::On::List qw<transform_list>; our @EXPORT_OK = qw<transform_hash transform_hash_as_list ...>; #... sub transform_hash { ... } #... sub transform_hash_as_list { return transform_list( %{ shift() } ); } #... 1 And another module has been segmented out for lists: # Really::Useful::Functions::On::List.pm use base qw<Exporter>; use strict; use warnings; use Really::Useful::Functions::On::Hash qw<transform_hash>; our @EXPORT_OK = qw<transform_list some_func ...>; #... sub transform_list { ... } #... sub some_func { my %params = transform_hash @_; #... } #... 1 Suppose that enough of these utility functions are handy enough that I'll want to use them in BEGIN statements and import functions to process parameter lists or configuration data. I have been putting sub definitions into BEGIN blocks to make sure they are ready to use whenever somebody includes the module. But I have gotten into hairy race conditions where a definition is not completed in a BEGIN block. I put evolving code idioms into modules so that I can reuse any idiom I find myself coding over and over again. For instance: sub list_if { my $condition = shift; return unless $condition; my $more_args = scalar @_; my $arg_list = @_ > 1 ? \@_ : @_ ? shift : $condition; if (( reftype( $arg_list ) || '' ) eq 'ARRAY' ) { return wantarray ? @$arg_list : $arg_list; } elsif ( $more_args ) { return $arg_list; } return; } captures two idioms that I'm kind of tired of typing: @{ func_I_hope_returns_a_listref() || [] } and ( $condition ? LIST : ()) The more I define functions in BEGIN blocks, the more likely I'll use these idiom bricks to express the logic the more likely that bricks are needed in BEGIN blocks. Do people have standard ways of dealing with this sort of language-idiom-brick model? I've been doing mostly Pure-Perl; will XS alleviate some of this?

  • SQLAuthority News – Job Interviewing the Right Way (and for the Right Reasons) – Guest Post by Feodor Georgiev

    - by pinaldave
    Feodor Georgiev is a SQL Server database specialist with extensive experience of thinking both within and outside the box. He has wide experience of different systems and solutions in the fields of architecture, scalability, performance, etc. Feodor has experience with SQL Server 2000 and later versions, and is certified in SQL Server 2008. Feodor has written an excellent article on Job Interviewing the Right Way. Here is his article in his own words. A while back I was thinking to start a blog post series on interviewing and employing IT personnel. At that time I had just read the ‘Smart and gets things done’ book (http://www.joelonsoftware.com/items/2007/06/05.html) and I was hyped up on some debatable topics regarding finding and employing the best people in the branch. I have no problem with hiring the best of the best; it’s just the definition of ‘the best of the best’ that makes things a bit more complicated. One of the fundamental books one can read on the topic of interviewing is the one mentioned above. If you have not read it, then you must do so; not because it contains the ultimate truth, and not because it gives the answers to most questions on the subject, but because the book contains an extensive set of questions about interviewing and employing people. Of course, a big part of these questions have different answers, depending on location, culture, available funds and so on. (What works in the US may not necessarily work in the Nordic countries or India, or it may work in a different way). The only thing that is valid regardless of any external factor is this: curiosity. In my belief there are two kinds of people – curious and not-so-curious; regardless of profession. Think about it – professional success is directly proportional to the individual’s curiosity + time of active experience in the field. (I say ‘active experience’ because vacations and any distractions do not count as experience :)  ) So, curiosity is the factor which will distinguish a good employee from the not-so-good one. But let’s shift our attention to something else for now: a few tips and tricks for successful interviews. Tip and trick #1: get your priorities straight. Your status usually dictates your priorities; for example, if the person looking for a job has just relocated to a new country, they might tend to ignore some of their priorities and overload others. In other words, setting priorities straight means to define the personal criteria by which the interview process is lead. For example, similar to the following questions can help define the criteria for someone looking for a job: How badly do I need a (any) job? Is it more important to work in a clean and quiet environment or is it important to get paid well (or both, if possible)? And so on… Furthermore, before going to the interview, the candidate should have a list of priorities, sorted by the most importance: e.g. I want a quiet environment, x amount of money, great helping boss, a desk next to a window and so on. Also it is a good idea to be prepared and know which factors can be compromised and to what extent. Tip and trick #2: the interview is a two-way street. A job candidate should not forget that the interview process is not a one-way street. What I mean by this is that while the employer is interviewing the potential candidate, the job seeker should not miss the chance to interview the employer. Usually, the employer and the candidate will meet for an interview and talk about a variety of topics. 
In a quality interview the candidate will be presented to key members of the team and will have the opportunity to ask them questions. By asking the right questions both parties will define their opinion about each other. For example, if the candidate talks to one of the potential bosses during the interview process and they notice that the potential manager has a hard time formulating a question, then it is up to the candidate to decide whether working with such person is a red flag for them. There are as many interview processes out there as there are companies and each one is different. Some bigger companies and corporates can afford pre-selection processes, 3 or even 4 stages of interviews, small companies usually settle with one interview. Some companies even give cognitive tests on the interview. Why not? In his book Joel suggests that a good candidate should be pampered and spoiled beyond belief with a week-long vacation in New York, fancy hotels, food and who knows what. For all I can imagine, an interview might even take place at the top of the Eifel tower (right, Mr. Joel, right?) I doubt, however, that this is the optimal way to capture the attention of a good employee. The ‘curiosity’ topic What I have learned so far in my professional experience is that opinions can be subjective. Plus, opinions on technology subjects can also be subjective. According to Joel, only hiring the best of the best is worth it. If you ask me, there is no such thing as best of the best, simply because human nature (well, aside from some physical limitations, like putting your pants on through your head :) ) has no boundaries. And why would it have boundaries? I have seen many curious and interesting people, naturally good at technology, though uninterested in it as one  can possibly be; I have also seen plenty of people interested in technology, who (in an ideal world) should have stayed far from it. At any rate, all of this sums up at the end to the ‘supply and demand’ factor. The interview process big-bang boils down to this: If there is a mutual benefit for both the employer and the potential employee to work together, then it all sorts out nicely. If there is no benefit, then it is much harder to get to a common place. Tip and trick #3: word-of-mouth is worth a thousand words Here I would just mention that the best thing a job candidate can get during the interview process is access to future team members or other employees of the new company. Nowadays the world has become quite small and everyone knows everyone. Look at LinkedIn, look at other professional networks and you will realize how small the world really is. Knowing people is a good way to become more approachable and to approach them. Tip and trick #4: Be confident. It is true that for some people confidence is as natural as breathing and others have to work hard to express it. Confidence is, however, a key factor in convincing the other side (potential employer or employee) that there is a great chance for success by working together. But it cannot get you very far if it’s not backed up by talent, curiosity and knowledge. Tip and trick #5: The right reasons What really bothers me in Sweden (and I am sure that there are similar situations in other countries) is that there is a tendency to fill quotas and to filter out candidates by criteria different from their skill and knowledge. In job ads I see quite often the phrases ‘positive thinker’, ‘team player’ and many similar hints about personality features. 
So my guess here is that discrimination has evolved to a new level. Let me clear up the definition of discrimination: ‘unfair treatment of a person or group on the basis of prejudice’. And prejudice is the ‘partiality that prevents objective consideration of an issue or situation’. In other words, there is not much difference whether a job candidate is filtered out by race, gender or by personality features – it is all a bad habit. And in reality, there is no proven correlation between the technology knowledge paired with skills and the personal features (gender, race, age, optimism). It is true that a significantly greater number of Darwin awards were given to men than to women, but I am sure that somewhere there is a paper or theory explaining the genetics behind this. J This topic actually brings to mind one of my favorite work related stories. A while back I was working for a big company with many teams involved in their processes. One of the teams was occupying 2 rooms – one had the team members and was full of light, colorful posters, chit-chats and giggles, whereas the other room was dark, lighted only by a single monitor with a quiet person in front of it. Later on I realized that the ‘dark room’ person was the guru and the ultimate problem-solving-brain who did not like the chats and giggles and hence was in a separate room. In reality, all severe problems which the chatty and cheerful team members could not solve and all emergencies were directed to ‘the dark room’. And thus all worked out well. The moral of the story: Personality has nothing to do with technology knowledge and skills. End of story. Summary: I’d like to stress the fact that there is no ultimately perfect candidate for a job, and there is no such thing as ‘best-of-the-best’. From my personal experience, the main criteria by which I measure people (co-workers and bosses) is the curiosity factor; I know from experience that the more curious and inventive a person is, the better chances there are for great achievements in their field. Related stories: (for extra credit) 1) Get your priorities straight. A while back as a consultant I was working for a few days at a time at different offices and for different clients, and so I was able to compare and analyze the work environments. There were two different places which I compared and recently I asked a friend of mine the following question: “Which one would you prefer as a work environment: a noisy office full of people, or a quiet office full of faulty smells because the office is rarely cleaned?” My friend was puzzled for a while, thought about it and said: “Hmm, you are talking about two different kinds of pollution… I will probably choose the second, since I can clean the workplace myself a bit…” 2) The interview is a two-way street. One time, during a job interview, I met a potential boss that had a hard time phrasing a question. At that particular time it was clear to me that I would not have liked to work under this person. According to my work religion, the properly asked question contains at least half of the answer. And if I work with someone who cannot ask a question… then I’d be doing double or triple work. At another interview, after the technical part with the team leader of the department, I was introduced to one of the team members and we were left alone for 5 minutes. 
I immediately jumped on the occasion and asked the blunt question: ‘What have you learned here for the past year and how do you like your job?’ The team member looked at me and said ‘Nothing really. I like playing with my cats at home, so I am out of here at 5pm and I don’t have time for much.’ I was disappointed at the time and I did not take the job offer. I wasn’t that shocked a few months later when the company went bankrupt. 3) The right reasons to take a job: personality check. A while back I was asked to serve as a job reference for a coworker. I agreed, and after some weeks I got a phone call from the company where my colleague was applying for a job. The conversation started with the manager’s question about my colleague’s personality and about their social skills. (You can probably guess what my internal reaction was… J ) So, after 30 minutes of pouring common sense into the interviewer’s head, we finally agreed on the fact that a shy or quiet personality has nothing to do with work skills and knowledge. Some years down the road my former colleague is taking the manager’s position as the manager is demoted to a different department. Reference: Feodor Georgiev, Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
