Search Results

Search found 9062 results on 363 pages for 'big empin'.


  • What's the deal with char.GetNumericValue?

    - by mgroves
    I was working on Project Euler 40 and was a bit bothered that there is no int.Parse(char). Not a big deal, but I asked around and someone suggested char.GetNumericValue. GetNumericValue seems like a very odd method to me: it takes a char as a parameter and returns... a double? And it returns -1.0 if the char is not '0' through '9'. So what's the reasoning behind this method, and what purpose does returning a double serve? I even fired up Reflector and looked at InternalGetNumericValue, but it's just like watching Lost: every answer just leads to another question.
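
    The double starts to make sense once Unicode enters the picture: GetNumericValue is defined for every Unicode character that carries a numeric value, and some of those values are fractional. A minimal C# sketch of the documented behavior:

        using System;

        class Demo
        {
            static void Main()
            {
                Console.WriteLine(char.GetNumericValue('7'));      // 7
                // Unicode assigns fractional numeric values to some
                // characters, which is why the return type is double:
                Console.WriteLine(char.GetNumericValue('\u00BD')); // ½ -> 0.5
                Console.WriteLine(char.GetNumericValue('x'));      // -1 for non-numeric
                // for plain ASCII digits, the usual shortcut is:
                Console.WriteLine('7' - '0');                      // 7
            }
        }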

    Read the article

  • How can I move a table to another filegroup?

    - by denisioru
    Hello, I have MSSQL 2008 Enterprise and an OLTP database with two big tables. How can I move these tables to another filegroup without interrupting service? Right now, about 100-130 records are inserted and 30-50 records are updated each second in these tables. Each table has about 100M records and six fields (including one geography field). I looked for a solution via Google, but all the solutions amount to "create a second table, insert the rows from the first table, drop the first table", and so on. Can I use partitioning functions to solve this problem? Thank you.
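
    One hedged option: when the table has a clustered index (or clustered primary key), rebuilding that index on the target filegroup moves the data, and Enterprise Edition can do it online. A sketch with hypothetical index/table names; note that SQL Server 2008 disallows ONLINE = ON when the index contains LOB-type columns, so the geography column may force an offline window:

        -- Rebuild the clustered index on the target filegroup:
        CREATE UNIQUE CLUSTERED INDEX PK_BigTable      -- hypothetical name
        ON dbo.BigTable (id)                           -- existing clustered key
        WITH (DROP_EXISTING = ON, ONLINE = ON)         -- ONLINE needs Enterprise
        ON [FG_SECONDARY];                             -- target filegroup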

    Read the article

  • Python check if object is in list of objects

    - by John
    Hi, I have a list of objects in Python, and another list of objects. I want to go through the first list and see if any items appear in the second list. I thought I could simply do:

        for item1 in list1:
            for item2 in list2:
                if item1 == item2:
                    print "item %s in both lists" % item1

    However, this does not seem to work. It does work if I compare a single attribute instead:

        if item1.title == item2.title:

    I have more attributes than this, though, so I don't really want one big if statement comparing all the attributes if I don't have to. Can anyone give me help or advice on how to find the objects that appear in both lists? Thanks
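
    By default, == on instances of a user-defined class compares object identity, not attribute values; defining __eq__ switches it to value comparison, after which the in operator works too. A minimal sketch (the Item class and its attributes are hypothetical):

        class Item(object):
            def __init__(self, title, author):
                self.title = title
                self.author = author

            def __eq__(self, other):
                # compare all attributes at once instead of one big if
                return isinstance(other, Item) and self.__dict__ == other.__dict__

            def __hash__(self):
                return hash((self.title, self.author))

        list1 = [Item("a", "X"), Item("b", "Y")]
        list2 = [Item("b", "Y"), Item("c", "Z")]
        both = [item for item in list1 if item in list2]   # -> [Item("b", "Y")]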

    Read the article

  • SQL Selecting from one table OR another then joining the two

    - by Cyprus106
    So this is interesting, and apparently beyond my SQL skill set. I need to select a particular record where an ID = "0003" (or whatever) from either table1, or from table2 if table1 doesn't have that record. Then I need to join table1 and table2 on a mutual field they both have (the field name is Product_ID). I was playing with all sorts of variations of the following (no, it doesn't work), but after two days of groping through the internet and a big SQL book I still can't figure anything out:

        SELECT ProductStock.Product_ID AS PSID, Products.ID AS PID,
               ProductStock.*, Products.*
        FROM ProductStock, Products
        LEFT JOIN (Products AS Pr) ON Pr.ID = ProductStock.Product_ID
        WHERE (ProductStock.ID = "6003" OR Products.ID = "6003")
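
    One hedged reading of the requirement: take the Products row when it exists, fall back to ProductStock otherwise, and join on the shared key either way. A sketch using the table and column names from the post:

        SELECT p.*, ps.*
        FROM Products AS p
        LEFT JOIN ProductStock AS ps ON ps.Product_ID = p.ID
        WHERE p.ID = '6003'
        UNION ALL
        SELECT p.*, ps.*
        FROM ProductStock AS ps
        LEFT JOIN Products AS p ON p.ID = ps.Product_ID
        WHERE ps.ID = '6003'
          AND NOT EXISTS (SELECT 1 FROM Products WHERE ID = '6003');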

    Read the article

  • Clustered index on frequently changing reference table of one or more foreign keys

    - by Ian
    My specific concern is the performance of a clustered index on a reference table that has many rapid inserts and deletes.

        Table 1 "Collection": collection_pk int (among other fields)
        Table 2 "Item": item_pk int (among other fields)
        Reference table "Collection_Items": collection_pk int, item_pk int (combined primary key)

    Because the primary key is composed of both pks, a clustered index is created and the data is physically ordered in the table according to the combined keys. I have many users creating and deleting collections, and adding and removing items from those collections, very frequently, all of which affects the "Collection_Items" table and its clustered index. QUESTION PART: Since the "Collection_Items" table is so dynamic, wouldn't there be a big performance hit from constantly re-sorting the table rows because of the clustered index? If yes, what should I do to minimize this?
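
    For what it's worth, an insert does not re-sort the whole table: the new row goes onto the page its key belongs to, and the cost shows up as page splits when that page is full. A hedged DDL sketch of the schema above that leaves free space on each page for the churn (the fill factor is a hypothetical starting value, to be tuned from fragmentation statistics):

        CREATE TABLE Collection_Items (
            collection_pk INT NOT NULL REFERENCES Collection (collection_pk),
            item_pk       INT NOT NULL REFERENCES Item (item_pk),
            CONSTRAINT PK_Collection_Items
                PRIMARY KEY CLUSTERED (collection_pk, item_pk)
                WITH (FILLFACTOR = 80)   -- leave 20% free per leaf page
        );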

    Read the article

  • Partition a rectangle into near-squares of given areas

    - by Marko Dumic
    I have a set of N positive numbers and a rectangle of dimensions X and Y that I need to partition into N smaller rectangles such that:

        - the surface area of each smaller rectangle is proportional to its corresponding number in the given set
        - all space in the big rectangle is occupied, with no leftover space between smaller rectangles
        - each small rectangle is shaped as close to a square as feasible
        - the execution time is reasonably small

    I need directions on this. Do you know of such an algorithm described on the web? Do you have any ideas (pseudo-code is fine)? Thanks.
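
    This is essentially the treemap layout problem; "squarified treemaps" (Bruls, Huizing and van Wijk) is the standard algorithm for the near-square requirement. As a starting point, a minimal recursive slice-and-dice sketch in Python: it guarantees exactly proportional areas and full coverage, with only approximate squareness:

        def partition(rect, weights):
            """Split rect = (x, y, w, h) among weights, cutting the longer
            side near the weight midpoint at each recursion step."""
            x, y, w, h = rect
            if len(weights) == 1:
                return [rect]
            total = float(sum(weights))
            k, acc = 1, weights[0]
            while k < len(weights) - 1 and acc < total / 2:
                acc += weights[k]
                k += 1
            left, right, frac = weights[:k], weights[k:], acc / total
            if w >= h:   # cut the longer side to keep pieces square-ish
                return (partition((x, y, w * frac, h), left) +
                        partition((x + w * frac, y, w * (1 - frac), h), right))
            return (partition((x, y, w, h * frac), left) +
                    partition((x, y + h * frac, w, h * (1 - frac)), right))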

    Read the article

  • Why might someone say R is *NOT* a programming language? [closed]

    - by Tal Galili
    I came across the following comment today on Twitter: "R is not a programming language, it's a statistics package with the GUI missing." And I am wondering: why not? What is "missing" in R that would make it a "programming language"? Update: For the record, I am a big fan of R, use it daily, and support its existence. I have now changed the name of this thread from "Why is R NOT a programming language?" to "Why might someone say R is NOT a programming language?", which better reflects my motivation for this thread (which is to know whether R has any programmatic disadvantages that I might not have heard about).

    Read the article

  • Is it possible with a dynamic TSQL query?

    - by eugeneK
    I have a very long select query that I need to filter based on some parameters. I'm trying to avoid having different stored procedures, or if statements inside a single stored procedure, by using partly dynamic TSQL. I will avoid the long select for example's sake:

        select a from b where c = @c or d = @d

    @c and @d are filter params; only one can filter at a time, but both filters could also be disabled (0 for either means the param is disabled), so I can build an nvarchar with the where clause in it. How do I integrate a dynamic query here, so the 'where' can be added to a normal query? I cannot put the whole query into a big nvarchar, because there are too many things in it that will require changes (i.e. when's, subqueries, joins).
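
    A hedged alternative that avoids dynamic SQL entirely is the optional-filter pattern, where a disabled parameter (0) short-circuits its own predicate:

        SELECT a
        FROM b
        WHERE (@c = 0 OR c = @c)
          AND (@d = 0 OR d = @d)
        OPTION (RECOMPILE);   -- re-optimize for the actual parameter values

    If the query really must stay partly dynamic, sp_executesql with the WHERE clause concatenated onto the fixed SELECT text is the usual alternative.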

    Read the article

  • Is SHA sufficient for checking file duplication? (sha1_file in PHP)

    - by wag2639
    Suppose you wanted to make a file-hosting site where people upload their files and send a link to their friends to retrieve them later, and you want to ensure files aren't duplicated where you store them: is PHP's sha1_file good enough for the task? Is there any reason not to use md5_file instead? For the frontend, files will be obscured using the original file name stored in a database, but an additional concern is whether this would reveal anything about the original poster. Does a file carry any meta information with it, like last modified or who posted it, or is that stuff handled by the file system? Also, is using a salt frivolous, since security against rainbow-table attacks means nothing here and the hash could later be used as a checksum? One last thing: scalability? Initially it's only going to be used for small files, a couple of megs big, but eventually... Edit 1: The point of the hash is primarily to avoid file duplication, not to create obscurity.
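
    For the deduplication goal, the usual hedged sketch is content-addressed storage: key each stored file by its hash, so identical uploads collapse into one copy (paths and field names here are hypothetical):

        <?php
        $tmp  = $_FILES['upload']['tmp_name'];
        $hash = sha1_file($tmp);               // md5_file() would work the same way
        $dest = '/var/uploads/' . $hash;       // hypothetical storage root

        if (!file_exists($dest)) {             // same content, same hash: skip
            move_uploaded_file($tmp, $dest);
        }
        // Store ($hash, original name, uploader) in the database; the
        // hash doubles as a checksum when the file is read back.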

    Read the article

  • Why is there a margin at the top of my browser?

    - by fmz
    I have a web page that displays differently in Firefox and Safari (IE testing is yet to come). The page displays as expected in Safari, but there is a 50px margin between the body and the html element whose cause I can't determine. Here is the CSS for the body:

        body {
            font-size: 13px;
            line-height: 1.333em;
            background: #f6eaae url(../_images/parchment-big.jpg) no-repeat center top;
            font-family: "Lucida Grande", Lucida, Verdana, sans-serif;
            color: #323232;
        }

    I would really appreciate some assistance in finding what is causing this difference. Ideally the Firefox version is better, because it gives that extra breathing room at the top. Thanks.
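
    A hedged first check: browsers ship different default margins, and a top margin on the page's first element (an h1, for example) can collapse through the body and push everything down. A minimal reset rules both out:

        html, body {
            margin: 0;
            padding: 0;
        }
        /* if the gap survives this, inspect the first child of <body>
           for a collapsing top margin */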

    Read the article

  • Data format for content heavy iPhone app - Plist or XML?

    - by Toby
    Hello, I'm building an iPhone app that is essentially a book; it will be bundled with a lot of text-heavy content. I considered bundling the data as XML and loading it when the application starts, but the XML would contain a lot of nested structures and be a bit of a pain to parse. Would it be better to use a plist? I'm concerned about memory usage, and plists are loaded entirely into memory: can they be parsed in chunks? Is there a maximum size to a plist, and how efficient are they? I'm not sure how big the bundled content is going to be yet, but I imagine it could be anywhere from 500k to 4MB. Thanks in advance.

    Read the article

  • Is my Perl script grabbing environment variables from "someplace else"?

    - by Michael Wilson
    On a Solaris box in a "mysterious production system" I'm running a Perl script that references an environment variable. No big deal. The contents of that variable from the shell, both pre- and post-execution, are what I expect. However, when reported by the script, it appears as though it's running in some other sub-shell that is clobbering my vars with different values for the duration of the script. Unfortunately, I really can't paste the code. I'm trying to get a minimal test case, but I'm at my wit's end here.
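
    A hedged way to narrow it down: have the script print both its own view of the variable and what a subshell it spawns sees (MY_VAR is a hypothetical variable name):

        #!/usr/bin/perl
        use strict;
        use warnings;

        # what this perl process inherited from its parent shell:
        print "script sees: ",
              (defined $ENV{MY_VAR} ? $ENV{MY_VAR} : "(unset)"), "\n";
        # what a subshell spawned by the script sees:
        system('echo "subshell sees: $MY_VAR"');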

    Read the article

  • move data from one table to another, postgresql edition

    - by IggShaman
    Hi all, I'd like to move some data from one table to another (with a possibly different schema). The straightforward solution that comes to mind is: start a transaction with serializable isolation level, then

        INSERT INTO dest_table SELECT data FROM orig_table, other-tables WHERE <condition>;
        DELETE FROM orig_table USING other-tables WHERE <condition>;
        COMMIT;

    Now what if the amount of data is rather big, and the <condition> is expensive to compute? In PostgreSQL, a RULE or a stored procedure can be used to delete data on the fly, evaluating the condition only once. Which solution is better? Are there other options?
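
    On PostgreSQL 9.1 and later, a writable CTE moves the rows in one atomic statement and evaluates the condition exactly once; a sketch with hypothetical join columns:

        WITH moved AS (
            DELETE FROM orig_table o
            USING other_table t
            WHERE o.key = t.key AND t.flag     -- the expensive <condition>
            RETURNING o.*
        )
        INSERT INTO dest_table
        SELECT * FROM moved;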

    Read the article

  • Is this SQL select code following good practice?

    - by acidzombie24
    I am using sqlite and will port to mysql (5) later. I wanted to know if I am doing something I shouldn't be doing. I purposely tried to design it so I compare to 0 instead of 1 (I changed hasApproved to NotApproved to do this; not a big deal, and I haven't written any code). I was told I should never need to write a subquery, but I do here. My Votes table is just id, ip, postid (I don't think I can write that subquery as a join instead?), and that's pretty much all that is on my mind. Naming conventions I don't really care about, since the tables are created via reflection and it is all over the place.

        select id, name, body, upvotes, downvotes,
            (select 1 from UpVotes where IPAddr=? AND post=Post.id) as myup,
            (select 1 from DownVotes where IPAddr=@0 AND post=Post.id) as mydown
        from Post
        where flag = '0'
        limit ?, ?
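
    For what it's worth, those subqueries can be expressed as joins; a hedged sketch of the same query (parameter markers normalized to ?):

        SELECT p.id, p.name, p.body, p.upvotes, p.downvotes,
               CASE WHEN uv.post IS NULL THEN 0 ELSE 1 END AS myup,
               CASE WHEN dv.post IS NULL THEN 0 ELSE 1 END AS mydown
        FROM Post p
        LEFT JOIN UpVotes uv   ON uv.IPAddr = ? AND uv.post = p.id
        LEFT JOIN DownVotes dv ON dv.IPAddr = ? AND dv.post = p.id
        WHERE p.flag = '0'
        LIMIT ?, ?;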

    Read the article

  • jQuery: getting two binds/clicks to play nice together?

    - by nobosh
    I have the following markup:

        <li id="1" class=" ">
            <a href="">Parking Lot</a>
            <span id="1" class="list-edit">edit</span>
        </li>

    I then have two binds:

        $("#lists li").click(function(){ ... });
        $(".list-edit").click(function(){ ... });

    The problem I'm having is that I need the LI to contain the EDIT span for CSS styling reasons (I have a big blue background), but this is preventing me from binding the EDIT button. Is there a way to get these two to play nice? Thanks
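
    The usual fix is to keep both handlers and stop the inner click from bubbling up to the li; a minimal sketch:

        $("#lists li").click(function () {
            // row-level behavior
        });
        $(".list-edit").click(function (event) {
            event.stopPropagation();   // don't let the click reach the <li>
            // edit behavior
        });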

    Read the article

  • Python preprocessing imports

    - by FiloSottile
    I am managing a quite large Python code base (2000 lines) that I want to be available as a single runnable Python script. So I am searching for a method or a tool to merge a development folder, made of different Python files, into a single running script. The thing/method I am searching for should take code split into different files, maybe with a starting __init__.py file that contains the imports, and merge it into a single, big script. Much like a preprocessor. Ideally a near-native way; better still if I can also run from the dev folder. I have already checked out pypp and pypreprocessor, but they don't seem to address the point. Something like a strange use of __import__(), or maybe a bunch of from foo import * lines replaced by the preprocessor with the code? Obviously I only want to merge my directory and not common libraries.
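
    One hedged alternative to textual merging: since Python 2.6 the interpreter can execute a zip archive directly if a __main__.py sits at its root, so the dev folder stays intact and ships as one runnable file (mypackage is a hypothetical package name):

        # Build and run, from the shell:
        #   cd devfolder && zip -r ../app.pyz . && python ../app.pyz
        #
        # __main__.py at the archive root:
        from mypackage.main import run   # hypothetical entry point

        if __name__ == '__main__':
            run()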

    Read the article

  • Which file types are worth compressing (zipping) for remote storage? For which of them the compresse

    - by user193655
    I am storing documents in SQL Server in varbinary(max) fields; I use FILESTREAM optionally when a user has: (DB_Size + Docs_Size) ~> 0.8 * ExpressEdition_Max_DB_Size. I am currently zipping all the files; this is done because the document read/write code was developed 10 years ago, when storage was more expensive than now. Many files are almost as big zipped as they are unzipped (a zipped PDF is about 95% of the original size), and unzipping has some overhead, which doubles when I also need to check in/update the file, because then I need to zip it again. So I was thinking of giving users the option to choose whether each file type will be zipped or not, by providing some meaningful default values. From my experience I would impose the following rules:

        1) zip by default: txt, bmp, rtf
        2) do not zip by default: jpg, jpeg, Microsoft Office files, Open Office files, png, tif, tiff

    Could you suggest other file types, chosen from among the most common, or comment on the ones I listed here?

    Read the article

  • Why are most really fast servers written in C instead of C++?

    - by orokusaki
    I'm trying to decide which to learn, and I've read all the "which is better" questions/arguments, so I thought I'd get your take on something more specific. Is there a platform-dependency issue that C++ developers run into with such applications? Or is it because there are more C developers out there than C++ developers? I also noticed that many more third-party C modules exist for Python, even though C++ modules are supported. From what I've read in different threads, the consensus is that C++ is easier and faster to write and runs just as fast. Am I missing something really big? Examples: NGINX, APE (comet server), Apache.

    Read the article

  • Javascript large number array compression

    - by gatapia
    Hi all, I've got a JavaScript application that sends a large amount of numerical data down the wire. This data is then stored in a database. I am having size issues (too much bandwidth, database getting too big). I am now ready to sacrifice some performance for compression. I was thinking of implementing base-62 number.toString(62) and parseInt(compressed, 62). This would certainly reduce the size of the data, but before I go ahead and do this I thought I would put it to the folks here, as I know there must be some outside-the-box solution I have not considered. The basic specs are: compress large number arrays into strings for JSONP transfer (so I think UTF is out); be relatively fast (I'm not expecting the same performance as I have now, but I also don't want gzip compression either). Any ideas would be greatly appreciated. Thanks, Guido Tapia
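
    One caveat worth flagging: Number.prototype.toString only accepts a radix from 2 to 36, so the base-62 codec has to be hand-rolled. A hedged sketch:

        var ALPHABET =
            '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ';

        function encode62(n) {           // non-negative integers only
            var s = '';
            do {
                s = ALPHABET.charAt(n % 62) + s;
                n = Math.floor(n / 62);
            } while (n > 0);
            return s;
        }

        function decode62(s) {
            var n = 0;
            for (var i = 0; i < s.length; i++) {
                n = n * 62 + ALPHABET.indexOf(s.charAt(i));
            }
            return n;
        }
        // encode62(123456789) === "8m0Kx"; decode62("8m0Kx") === 123456789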

    Read the article

  • Problems with dynamic programming

    - by xan
    I've got difficulties understanding dynamic programming, so I decided to solve some problems. I know basic dynamic algorithms like longest common subsequence and the knapsack problem, but I know them because I read them; I can't come up with something on my own :-( For example: we have a sequence of natural numbers, and each number can be taken with a plus or a minus. At the end we take the absolute value of the sum. For each sequence, find the lowest possible result.

        in1: 10 3 5 4              out1: 2
        in2: 4 11 5 5 5            out2: 0
        in3: 10 50 60 65 90 100    out3: 5

    Explanation for the 3rd: 5 = |10+50+60+65-90-100|. What's worse, my friend told me that it is a simple knapsack problem, but I can't see any knapsack here. Is dynamic programming something difficult, or is it just me having big problems with it?
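
    The knapsack connection, hedged: picking a sign for each number is the same as splitting the numbers into two piles and taking the difference of the pile sums, so the goal is a pile summing as close to half the total as possible. That is subset-sum, a knapsack variant; a small Python sketch over reachable sums:

        def min_abs_sum(nums):
            total = sum(nums)
            reachable = set([0])            # pile sums achievable so far
            for n in nums:
                reachable |= set(s + n for s in reachable)
            best = max(s for s in reachable if s <= total // 2)
            return total - 2 * best         # |other pile - this pile|

        print(min_abs_sum([10, 3, 5, 4]))              # 2
        print(min_abs_sum([4, 11, 5, 5, 5]))           # 0
        print(min_abs_sum([10, 50, 60, 65, 90, 100]))  # 5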

    Read the article

  • Obtain all keys of a Neo4j index

    - by MattiSG
    I have a Neo4j database whose content is generated dynamically from a big dataset. All "entry point" nodes are indexed in a named index (IndexManager.forNodes(…)). I can therefore look up a particular "entry point" node. However, I would now like to enumerate all those specific nodes, but I can't know under which keys they were indexed. Is there any way to enumerate all keys of a Neo4j index? If not, what would be the best way to store those keys, a data type that is eminently non-graph-oriented? UPDATE (thanks for asking for details :) ): the list would contain more than 2 million entries. The main use case would be to never update it after an initialization step, but other use cases might need it, so it has to be somewhat scalable. Also, I would really prefer to avoid killing my current resilience abilities, so storing all keys at once, as opposed to adding them incrementally, would be a last-resort solution.

    Read the article

  • How do I hide HTML until it's processed with JavaScript?

    - by acidzombie24
    I am using some JS code to transform my menu into a drilldown menu. The problem is that before the JS runs you see a BIG UGLY mess of links. On their site it's solved by putting the JS at the top; following the recommendations of Yahoo/YSlow, I am keeping the JS files at the bottom. I tried hiding the menu with display:none and then using jQuery's .show(), .css('display', ''), and .css('display', 'block'), and they all lead to a messed-up-looking menu (I get the title but not the title background color or any links of the menu). How do I properly hide a div/menu and show it after it has been rendered?
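
    A hedged sketch of the usual pattern: a one-line script in the head tags the document as script-capable, the CSS hides the menu only in that case, and the bottom script reveals it once the plugin has run (#menu is a hypothetical id):

        <!-- in <head>, before any content renders: -->
        <script>document.documentElement.className += ' js';</script>
        <style>.js #menu { display: none; } /* hidden only when JS will run */</style>

        <!-- at the bottom, after the drilldown plugin has initialized: -->
        <script>$('#menu').show();</script>

    Users without JavaScript never match the .js rule, so they still get the plain list of links.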

    Read the article

  • FileInputStream and FileOutputStream to the same file: Is a read() guaranteed to see all write()s that "happened before"?

    - by user946850
    I am using a file as a cache for big data. One thread writes to it sequentially; another thread reads it sequentially. Can I be sure that all data that has been written (by write()) in one thread can be read() from another thread, assuming a proper "happens-before" relationship in terms of the Java memory model? Is this behavior documented? EDIT: In my JDK, FileOutputStream does not override flush(), and OutputStream.flush() is empty. That's why I'm wondering... EDIT^2: The streams in question are owned exclusively by a class that I have full control of. Each stream is guaranteed to be accessed by one thread only. My tests show that it works as expected, but I'm still wondering whether this is guaranteed and documented. See also this related discussion: http://chat.stackoverflow.com/rooms/17598/discussion-between-hussain-al-mutawa-and-user946850
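
    A hedged sketch of the coordination described, assuming the OS makes bytes written to a local file visible to later reads of the same file: the writer publishes the readable length through an AtomicLong, and the reader consults it before trusting the stream, which supplies the happens-before edge (class and field names are hypothetical):

        import java.io.*;
        import java.util.concurrent.atomic.AtomicLong;

        class FileCache {
            private final AtomicLong published = new AtomicLong(0);

            void append(FileOutputStream out, byte[] chunk) throws IOException {
                out.write(chunk);                   // hand the bytes to the OS first
                published.addAndGet(chunk.length);  // then publish the new length
            }

            int read(FileInputStream in, byte[] buf, long alreadyRead)
                    throws IOException {
                long avail = published.get() - alreadyRead;  // read the fence first
                if (avail <= 0) return 0;
                return in.read(buf, 0, (int) Math.min(buf.length, avail));
            }
        }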

    Read the article

  • Postpone email when domains are equal

    - by michalzuber
    Hi. For a client I'm developing 4 different email agents for his web portal. I need to send a lot of emails to clients (a couple of thousand in the future), which are stored in a database. Sending is fine, but I would like to work out a PHP script that sends emails but also remembers the previous email's domain; if they are equal, it postpones that email for later sending, to avoid spam filters. I'm going to run that script with cron, and I have already set set_time_limit(0). Big thanks for replies with ideas ;)
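
    A hedged sketch of the postponing loop: skip an address whose domain matches the previously sent one and re-queue it for the next cron run ($recipients, $subject and $body are hypothetical):

        <?php
        $prevDomain = null;
        $deferred   = array();

        foreach ($recipients as $email) {
            $domain = substr(strrchr($email, '@'), 1);
            if ($domain === $prevDomain) {
                $deferred[] = $email;        // same domain twice in a row: postpone
                continue;
            }
            mail($email, $subject, $body);
            $prevDomain = $domain;
        }
        // write $deferred back to the queue table for the next cron run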

    Read the article

  • VBA for filtering columns

    - by Ampi Severe
    I have a big database-like sheet; the first row contains headers. I would like a subset of the rows of this table, based on column values. Two issues: 1) VBA-wise, I would like to loop through the columns and, when the values for all necessary columns match, copy the entire row into a new sheet. 2) The subset of rows is based on a list; this should be the first column to be looped through. For example, I want all rows where the value in column A is equal to one of the values in my list. Is there any possibility to autofilter strings based on a list (column) of strings? EDIT: Thanks to @Doug Glancy the autofiltering works now, so I've removed my (horrible) code; issue 1 is solved.
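
    For the list-based filter in issue 2: AutoFilter accepts an array of strings together with the xlFilterValues operator, which is exactly "filter column A against my list". A hedged sketch with hypothetical sheet names:

        Sub FilterByList()
            Dim n As Long, i As Long
            Dim crit() As String

            With Worksheets("Lists")                 ' the list of wanted values
                n = .Cells(.Rows.Count, 1).End(xlUp).Row
                ReDim crit(0 To n - 1)
                For i = 1 To n
                    crit(i - 1) = CStr(.Cells(i, 1).Value)  ' must be text to match
                Next i
            End With

            With Worksheets("Data").Range("A1").CurrentRegion
                .AutoFilter Field:=1, Criteria1:=crit, Operator:=xlFilterValues
                .SpecialCells(xlCellTypeVisible).Copy Worksheets("Result").Range("A1")
                .Parent.AutoFilterMode = False       ' clear the filter
            End With
        End Sub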

    Read the article
