Search Results

Search found 10417 results on 417 pages for 'large'.


  • I need help understanding what Exercise 5-12 is asking for in the C Programming Language book.

    - by marsol0x
    K&R, The C Programming Language, pg. 105: "Extend entab and detab to accept the shorthand entab -m +n to mean tab stops every n columns, starting at column m." entab replaces a number of spaces with a tab character and detab does the opposite. My question concerns the tab stops and entab. I figure that for detab it's pretty easy to determine the number of spaces needed to reach the next tab stop, so no worries there. With entab, replacing spaces with tabs is slightly harder, since I cannot know for sure how far a tab character advances before it reaches its own tab stop (unless there is a way to know for sure). Am I even thinking about this properly?
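
    One way to look at the tab-stop arithmetic (a small sketch in C, not K&R's own solution; the function name and the 1-based column convention are my own): with entab -m +n the stops sit at columns m, m+n, m+2n, and so on, so both entab and detab only ever need "the first stop strictly after the current column".

        /* next tab stop after the current column (col is 1-based) */
        int next_tab_stop(int col, int m, int n)
        {
            if (col < m)
                return m;                        /* first stop */
            return m + ((col - m) / n + 1) * n;  /* next multiple of n past column m */
        }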

    Read the article

  • jQuery, Forms, Browser Refreshes

    - by Eric Cope
    I have a large form where some fields' values depend on previous elements. I use jQuery's .trigger() to fire the dependent fields' update functions when a source element changes. When I refresh the page (click reload or click back), the previously selected values are still there, but the dependent fields do not reflect them. How can I trigger the update functions after a refresh? I saw a way to prevent the browser from using the form's cached values, but I'd rather keep the cached values and just update the dependent elements from them.
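
    A minimal sketch of one approach (the .has-dependents class is hypothetical; it just marks the source fields whose change handlers drive the dependent ones). The pageshow event also fires when the page is restored from the back/forward cache, so the handlers re-run against the browser-restored values:

        $(window).on("pageshow", function () {
            // re-fire each source field's change handler so dependent
            // fields recompute from the cached/restored values
            $(".has-dependents").each(function () {
                $(this).trigger("change");
            });
        });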

    Read the article

  • Write to pipe deadlocks program

    - by avs3323
    Hi, I am having a problem in my program that uses pipes. I am using pipes along with fork/exec to send data to another process. What I have is something like this:

        // pipes are created up here
        if (fork() == 0)    // child process
        {
            ...
            execlp(...);
        }
        else
        {
            ...
            fprintf(stderr, "Writing to pipe now\n");
            write(pipe, buffer, BUFFER_SIZE);
            fprintf(stderr, "Wrote to pipe!");
            ...
        }

    This works fine for most messages, but when the message is very large, the write into the pipe deadlocks. I think the pipe might be full, but I do not know how to clear it. I tried using fsync but that didn't work. Can anyone help me?
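
    For context, a pipe only buffers a limited amount of data (commonly 64 KB on Linux), so write() blocks once that buffer fills and only resumes when the reader drains it; nothing on the writing side will "clear" it. A minimal sketch of the usual plumbing, assuming the exec'd program reads its stdin (the program "sort" and the message are placeholders): each process closes the pipe end it does not use, the child gets the read end as stdin, and the parent closes the write end when done so the child sees EOF.

        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        int main(void)
        {
            int fds[2];
            if (pipe(fds) == -1) { perror("pipe"); return 1; }

            if (fork() == 0) {                  /* child */
                close(fds[1]);                  /* child only reads */
                dup2(fds[0], STDIN_FILENO);     /* pipe becomes the child's stdin */
                close(fds[0]);
                execlp("sort", "sort", (char *)NULL);
                perror("execlp");
                _exit(1);
            }

            close(fds[0]);                      /* parent only writes */
            const char *msg = "a very large message...\n";
            write(fds[1], msg, strlen(msg));    /* blocks only while the child is behind */
            close(fds[1]);                      /* EOF for the child */
            return 0;
        }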

    Read the article

  • How to create a WordPress-like options table and get values for each row?

    - by Nacho
    Hi guys. I'm looking to create an options table in my db that makes every record a system option, so I can work with a small number of fields. My table has the following structure: three columns named id, name, and value. The following data is inserted as an example:

        +--+-----------+--------------------------+
        |id|name       |value                     |
        +--+-----------+--------------------------+
        | 1|uri        |www.example.com           |
        | 2|sitename   |Working it out            |
        | 3|base_folder|/folder1/folder2/         |
        | 4|slogan     |Just a slogan for the site|
        +--+-----------+--------------------------+

    That way I can include a large number of customizable system options very easily. The problem is that I don't know how to retrieve them. How do I get the value of uri and store it as a var? And better yet, how do I get, for example, the values of id 1 and 4 only, without making a query each time? (I assume multiple queries are wasteful and a pretty ugly method.) I know the question is pretty basic but I'm lost here. I'd really appreciate your answer!
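
    A sketch of the usual pattern (the table name options is assumed here): fetch all the rows you need in one query and fold them into an associative structure keyed by name, so each option afterwards is a plain lookup, not a query.

        -- everything at once (options tables are usually small enough for this) ...
        SELECT name, value FROM options;

        -- ... or only the rows you need, still in a single query
        SELECT name, value FROM options WHERE name IN ('uri', 'slogan');

    On the application side, loop over the result once and build an associative array keyed by name (e.g., in PHP, something like $options['uri']), then read individual options from that array instead of issuing one query per value.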

    Read the article

  • Sorting SQL query results in Java

    - by Manu
    Firstly, my apologies for the large amount of text here. Below is the result of executing an SQL query in Java, but I want to sort it using the timestamp. For example, the timestamp in the first line below is 040501 (HH:mm:ss); likewise, every row contains a timestamp, and the sorting should be done on that timestamp. Each line below is a single row from the database:

        12010051104050131331GZM4 7000000 1 FCFR
        120100511040501912828MP2 11590000 0 NOTY
        120100511040501312938VF7 366140 .96808795 FGPC
        120100511040501912828KA7 6580000 0 NOTY
        120100511040501912828JH4 490000 0 NOTY
        120100511160528912810PV4 83227500 1.03581 TRIB
        120100511160538912795W31 0 1 BILL
        120100511160540912828MP2 455784400 0 NOTY
        120100511160545912795W31 0 1 BILL
        220100511 040501 2101000
        220100511 040501 51037707
        220100511 040502 700149
        220100511 040502 4289000
        220100511 060514 71616600
        220100511 060514 722453500

    The result I would expect is:

        12010051104050131331GZM4 7000000 1 FCFR
        120100511040501912828MP2 11590000 0 NOTY
        120100511040501312938VF7 366140 .96808795 FGPC
        120100511040501912828MP2 11590000 0 NOTY
        120100511040501912828JH4 490000 0 NOTY
        20100511040501 2101000
        20100511040501 51037707
        20100511040502 4289000
        20100511040502 700149
        20100511060514 722453500
        20100511060514 71616600
        20100511160528912810PV4 83227500 1.03581 TRIB
        20100511160538912795W31 0 1 BILL
        20100511160540912828MP2 455784400 0 NOTY
        20100511160545912795W31 0 1 BILL

    Please help me out. I have been fighting with this for a very long time. Thanks in advance for your help.
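
    If the timestamp can be reconstructed in SQL, an ORDER BY in the query itself is the simplest fix; otherwise the rows can be sorted in memory after fetching. A rough Java sketch of the in-memory route (extractTime() is a hypothetical helper, and the substring offsets assume the layout shown above: one record-type digit, an 8-digit date, then a 6-digit HHmmss time, with any spaces removed first):

        import java.util.*;

        public class RowSorter {
            // pull the HHmmss portion out of a raw row
            static String extractTime(String row) {
                String digits = row.replace(" ", "");   // "220100511 040502 ..." -> "220100511040502..."
                return digits.substring(9, 15);         // skip type digit + yyyyMMdd
            }

            public static void main(String[] args) {
                List<String> rows = new ArrayList<>(Arrays.asList(
                        "120100511160540912828MP2 455784400 0 NOTY",
                        "220100511 040502 700149",
                        "12010051104050131331GZM4 7000000 1 FCFR"));
                rows.sort(Comparator.comparing(RowSorter::extractTime));  // stable sort by timestamp
                rows.forEach(System.out::println);
            }
        }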

    Read the article

  • Select calls seem to not time out

    - by martsbradley
    Hi folks, I have a threaded C++ program where up to three threads are calling select on three separate socket descriptors, waiting for data to become available. Each thread handles one socket and adds it to the readfds with a timeout of 300 seconds. After select returns, if there is data available I call recv to read it. Is there anything I need to be aware of with Winsock and threads? For some reason, after a number of hours, the select calls all seem to stop timing out. Can a multi-threaded program call select from a number of threads without issue? I know that I should have one thread listening on all three sockets, however that would be a large change for this app and I'm only looking to apply a bug fix. Cheers, Martin.
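
    Calling select from several threads is generally fine as long as each thread works on its own socket and rebuilds its arguments on every iteration; select modifies the fd_set (and, on some platforms, the timeout), so reusing them across calls is a classic cause of loops that stop timing out. A per-thread sketch, not the original code (sock is the thread's own descriptor):

        for (;;) {
            fd_set readfds;
            struct timeval tv;

            FD_ZERO(&readfds);              /* must be rebuilt every iteration */
            FD_SET(sock, &readfds);
            tv.tv_sec  = 300;               /* likewise the timeout */
            tv.tv_usec = 0;

            int rc = select((int)sock + 1, &readfds, NULL, NULL, &tv);
            if (rc == 0) {
                /* timed out: no data in 300 seconds */
            } else if (rc < 0) {
                /* error: check WSAGetLastError() / errno, then bail out or retry */
                break;
            } else {
                /* data ready: recv(sock, buffer, sizeof buffer, 0) */
            }
        }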

    Read the article

  • Entity Framework projections

    - by David McClelland
    We are investigating the Entity Framework to see if it will meet our particular needs. Here is the scenario I am interested in: I have a large table (let's call it VeryWideRecord) which has many columns, and it has a corresponding business object (it's also called VeryWideRecord). I would like to be able to query my database for a VeryWideRecord business object, but only have values for certain columns returned by the underlying SQL. Can I do this with the Entity Framework? I am uncertain as to whether this could be done with the Entity Framework's table splitting feature, because the application needs to be able (at runtime) to change the columns that are requested. The reason for this is that we are trying to minimize the amount of information that is going across the wire. I see how this could be done using NHibernate (example), but how can I do this with the Entity Framework?
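
    For the simple case, projection in LINQ to Entities already does this: selecting into an anonymous type (or a slim DTO) makes EF generate SQL that fetches only those columns. A sketch with hypothetical property and set names; choosing the columns dynamically at runtime would additionally require building the Select expression (or an Entity SQL / raw query string) on the fly rather than writing it literally:

        var someId = 42;   // example key
        // only Id, Name and CreatedOn appear in the generated SQL,
        // not the whole VeryWideRecord row
        var slim = context.VeryWideRecords
                          .Where(r => r.Id == someId)
                          .Select(r => new { r.Id, r.Name, r.CreatedOn })
                          .ToList();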

    Read the article

  • How can I implement lazy loading on a page with 500+ images?

    - by Fedor
    I basically have a booking-engine unit results page which must show 40 units, and for each unit there is one large image (the first thumbnail) plus some number of accompanying thumbnail images. I've been using the jQuery lazy load plugin, but it's not thorough enough (I'm invoking it on DOM ready), plus it doesn't really work in IE (50% of the clients use IE, so it's a big issue). What I think I really need to do is output not the image itself but a placeholder element such as a span, and modify my code so that when the user scrolls the placeholder into view it is rendered into an image element. <span src="/images/foo.gif"> The booking engine relies on JS, so I think I might be forced to ajaxify all the thumbnails and add event handlers on window scroll, etc., so the page stays usable and loads in an average time (2-3 seconds instead of 5-30s on high-speed DSL/Cable). I'd appreciate any examples or ideas.
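
    A rough sketch of the placeholder idea (since src is not a valid attribute on a span, a common variant is an img with a data-src attribute and no src; the selector and the 200px look-ahead below are my own choices):

        function loadVisibleImages() {
            $("img[data-src]").each(function () {
                var $img = $(this);
                var viewBottom = $(window).scrollTop() + $(window).height();
                if ($img.offset().top < viewBottom + 200) {   // within ~200px of the viewport
                    $img.attr("src", $img.attr("data-src")).removeAttr("data-src");
                }
            });
        }

        $(window).scroll(loadVisibleImages).resize(loadVisibleImages);
        $(loadVisibleImages);   // run once on DOM ready for the initially visible rows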

    Read the article

  • rsync useful w/ encrypted files?

    - by barrycarter
    Is rsync efficient for transferring encrypted files? More specifically: I encrypt 'x' with my public key and call the result 'y'. I rsync 'y' to my backup server. 'x' changes slightly. I encrypt the modified 'x' and rsync the modified 'y' to my backup server. Is this efficient? I know a small change in 'x' yields a large change in 'y', but is the change localized? Or has 'y' changed so thoroughly that rsync is not much better than scp? I currently back up my "critical" files by tarring/bzipping them nightly, then encrypting the .tar.bz file and rsync'ing it to my backup server. Many of the individual files don't change, but, of course, the tar file changes if even one of the files changes. Is this efficient? Should I be encrypting and backing up each file individually? That way, unchanged files will take no time to rsync.

    Read the article

  • Universal syntax file format?

    - by Isaiah
    Hey, as a project to improve my programming skills I've begun programming a nice code editor in Python, to teach myself project management, version control, and GUI programming. I want to utilize syntax files made for other programs so I can start with a large collection. I was wondering if there is any kind of universal syntax file format, in much the same sense as .odt files. I heard of one once in a forum (it had a website), but I can't remember it now. If not, I may just try to use gedit or Geany syntax files. Thanks.

    Read the article

  • Access DB with SQL Server Back End

    - by uyuni99
    I have an old Access application that has a lot of code in forms and reports. The database is getting too large and I am thinking of moving the back end to SQL Server. My requirements are as follows:

    - The DB needs to be multiuser, and the users (3-5) will need to log in over the web.
    - I would prefer not to re-write the forms and reports in ASP or some other web front end.

    When I think about my choices, I see them as:

    - Have an Access ADP front end and allow remote log-in to the server where it is stored. Not sure if it is possible for 2 users to log in simultaneously.
    - Distribute an ADP front end to the users, but I am not sure if it is possible to connect to a SQL Server back end over the internet, and the network traffic may be an issue.
    - Any other solution?

    I appreciate all help. u

    Read the article

  • Is there a work-around that allows missing data to equal NULL for LOAD DATA INFILE in MySQL?

    - by richardh
    I have a lot of large CSV files with NULL values stored as ,, (i.e., no entry). After a lot of searching I found that this is a known "bug", although it may be a feature for some users. Is there a way that I can fix this on the fly without pre-processing? These data are all numeric, so a zero value is very different from NULL. Or, if I have to do pre-processing, is there an approach that is most promising for dealing with tens of CSV files of 100 MB to 1 GB? Thanks!
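
    One documented way to handle this inside LOAD DATA INFILE itself is to read the affected columns into user variables and convert empty strings to NULL with a SET clause; the file path, table name, and column names below are placeholders:

        LOAD DATA INFILE '/path/to/data.csv'
        INTO TABLE measurements
        FIELDS TERMINATED BY ','
        LINES TERMINATED BY '\n'
        (id, @v1, @v2)
        SET v1 = NULLIF(@v1, ''),
            v2 = NULLIF(@v2, '');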

    Read the article

  • Options for header in raw byte file.

    - by Tim
    I have a large raw data file (up to 1 GB) which contains raw samples from a USB data logger. I need to store extra information relating to the file (sample rate, description, trigger point, last seek position, etc.) and was looking into adding this as some sort of header. The header should ideally be human-readable and flexible, so I've so far ruled out binary serialization into a header. I also want to avoid two separate files, as they could end up separated when copied or backed up. I remember somebody telling me that newer Microsoft Office documents (.docx, .xlsx and so on) are actually a number of files inside a zip archive. Is there a simple way to achieve this? Could I still keep the quick seek times into the raw file?
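
    A simpler alternative sketch (my own suggestion, not from the question): keep a single file with a length-prefixed, human-readable JSON header in front of the raw samples, so the metadata travels with the data and the raw section can still be seeked into directly at a known offset. Shown in Python purely for illustration:

        import json
        import struct

        def write_logger_file(path, header, samples):
            blob = json.dumps(header, indent=2).encode("utf-8")
            with open(path, "wb") as f:
                f.write(struct.pack("<I", len(blob)))   # 4-byte little-endian header length
                f.write(blob)                           # human-readable metadata
                f.write(samples)                        # raw bytes, untouched

        def read_logger_header(path):
            with open(path, "rb") as f:
                (length,) = struct.unpack("<I", f.read(4))
                header = json.loads(f.read(length).decode("utf-8"))
            return header, 4 + length                   # metadata and offset of the raw samples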

    Read the article

  • Config file format

    - by Felics
    Hello, does anyone know of a file format for configuration files that is easy for humans to read? I want something like tag = value, where value may be:

    - String
    - Number (int or float)
    - Boolean (true/false)
    - Array (of String, Number, or Boolean values)
    - Another structure (it will be clearer what I mean in the following example)

    Now I use something like this:

        IntTag=1
        FloatTag=1.1
        StringTag="a string"
        BoolTag=true
        ArrayTag1=[1 2 3]
        ArrayTag2=[1.1 2.1 3.1]
        ArrayTag3=["str1" "str2" "str3"]
        StructTag=
        {
            NestedTag1=1
            NestedTag2="str1"
        }

    and so on. Parsing is easy, but for large files I find it hard to read/edit in text editors. I don't like XML for the same reason: it's hard to read. INI does not support nesting, and I want to be able to nest tags. I also don't want a complicated format, because I will only use the limited kinds of values mentioned above. Thanks for any help.
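
    For comparison, here is the same data expressed in YAML, one existing format that covers exactly this list (scalars, booleans, arrays, nested structures), stays readable in a text editor, and has parsers for most languages; JSON is the other common candidate but is noisier to edit by hand:

        IntTag: 1
        FloatTag: 1.1
        StringTag: "a string"
        BoolTag: true
        ArrayTag1: [1, 2, 3]
        ArrayTag2: [1.1, 2.1, 3.1]
        ArrayTag3: ["str1", "str2", "str3"]
        StructTag:
          NestedTag1: 1
          NestedTag2: "str1"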

    Read the article

  • Archiving Database Tables using Java

    - by HonorGod
    My application requires archiving database tables between Sybase and DB2 (and vice versa) and within the same engine (DB2 to DB2, Sybase to Sybase), using Java. I am trying to understand the best strategies in terms of performance, implementation, ease of use, and scalability. Here is my current process:

    - Source and destination tables, with the acceptable parameters (from Java), are defined in XML.
    - The application reads the source and destination configurations and executes them sequentially.
    - The destination is sometimes optional, when the source is just deleting data from a specific table or just calling a stored procedure.
    - The data set between source and destination is extremely large (in the millions of rows).

    Off the top of my head, it looks like I can define dependencies between multiple source/destination combinations and have them execute in parallel in multiple threads. But will this improve performance (I hope it will)? Are there any open-source frameworks for data archiving using Java? Any other thoughts on the implementation side would be really helpful. Thanks.

    Read the article

  • FlockDB - What is it? And best cases for its use.

    - by Guru
    Just came across the FlockDB graph database. Details at github/flockDB. Twitter claims it uses FlockDB for the following: "Twitter runs FlockDB on a large cluster of machines. We use it to store social graphs (who follows whom, who blocks whom) and secondary indices at Twitter." At first glance, setting it up and trying it doesn't look straightforward. Has anyone already used it or set it up? If so, please answer the following general queries:

    - What kinds of applications is it better suited for? (Twitter claims it is simple and very rough; it remains to be seen what that means, though.)
    - How is FlockDB better than other graph DBs / NoSQL DBs?
    - Have you set up FlockDB or used it for an application? Any early advice?

    Note: I am evaluating FlockDB and other graph databases mainly to learn them. Perhaps I will build an application for that.

    Read the article

  • Excel import to SQL returning NULL for decimals when in VARCHAR data type

    - by Daniel
    Hi, I am working on a piece of software which has grown exponentially over the last few years, and the database needs to be regularly updated. Customers are now providing us with data on large spreadsheets, which we format and will start importing into the database. I am using the Import and Export Data (32-bit) Wizard. One column in the database contains values like '1.1.1.2', etc., and I am importing them as VARCHAR, as that is the data type in the database. However, for values like '8.5', NULL is getting imported instead. It only occurs when there is one decimal point. Is this a formatting problem with Excel, or is it the wrong data type?

    Read the article

  • Why has Foundation 4 made its grid classes less natural and readable?

    - by Brenden
    The Background: I love responsive CSS grids. I hate Bootstrap's complex class names. I fell in love with Foundation's human-readable class names. The Problem: With Foundation 4, they have changed "four columns" to "large-4 small-4 columns", and in my opinion this makes the HTML markup less clear. The old style of class names is exactly why I switched from Bootstrap to Foundation. The Question: What advantage is gained by Foundation 4's grid in making this change? It seems that you can have a different grid layout on smaller screens via media queries, but I can't think of a design that would require this. Note: I've been focused on native mobile development, so I may be missing out on recent best practices.
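
    One example of a design the separate small-*/large-* widths enable (a sketch using Foundation 4 class names; the content is made up): a main column and sidebar that sit side by side on large screens but stack full-width on phones, without writing any media queries by hand.

        <div class="row">
          <div class="small-12 large-8 columns">main content</div>
          <div class="small-12 large-4 columns">sidebar</div>
        </div>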

    Read the article

  • Java escape HTML - string replace slow?

    - by cpf
    Hi StackOverflow, I have a Java application that reads and processes a large file and feeds the results to SolrEmbeddedServer (http://lucene.apache.org/solr/). One of the functions does basic HTML escaping:

        private String htmlEscape(String input) {
            return input.replace("&", "&amp;").replace(">", "&gt;").replace("<", "&lt;")
                    .replace("'", "&apos;").replaceAll("\"", "&quot;");
        }

    While profiling the application, I found the program spends roughly 58% of its time in this function: 47% in replace and 11% in replaceAll. Now, is Java's replace really that slow, or am I on the right track and should I consider the program efficient enough to have its bottleneck in Java and not in my code? (Or am I replacing wrong?) Thanks in advance!
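
    For what it's worth, each chained replace() scans the whole string and allocates a new copy (and replaceAll compiles a regex, which isn't needed for a plain quote). A single-pass sketch that builds the result once with a StringBuilder, offered as an alternative shape rather than the original code:

        private String htmlEscape(String input) {
            StringBuilder sb = new StringBuilder(input.length() + 16);
            for (int i = 0; i < input.length(); i++) {
                char c = input.charAt(i);
                switch (c) {
                    case '&':  sb.append("&amp;");  break;
                    case '<':  sb.append("&lt;");   break;
                    case '>':  sb.append("&gt;");   break;
                    case '\'': sb.append("&apos;"); break;
                    case '"':  sb.append("&quot;"); break;
                    default:   sb.append(c);        // most characters pass through unchanged
                }
            }
            return sb.toString();
        }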

    Read the article

  • Convert long number to a string during serialization

    - by Bruno
    I have a custom-made class that uses a long as its ID. However, when I call my action using ajax, the ID is truncated and loses its last two digits, because JavaScript loses precision when dealing with large numbers. My solution would be to hand a string to my JavaScript, but the ID has to stay a long on the server side. Is there a way to serialize the property as a string? I'm looking for some kind of attribute.

    Controller:

        public class CustomersController : ApiController
        {
            public IEnumerable<CustomerEntity> Get()
            {
                yield return new CustomerEntity() { ID = 1306270928525862486, Name = "Test" };
            }
        }

    Model:

        public class CustomerEntity
        {
            public long ID { get; set; }
            public string Name { get; set; }
        }

    JSON Result:

        [{"Name":"Test","ID":1306270928525862400}]
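
    Assuming the default Web API setup where Json.NET handles serialization, one attribute-based sketch is a small JsonConverter that writes the long out as a JSON string, applied only to the ID property (the converter name is my own):

        using System;
        using Newtonsoft.Json;

        public class LongToStringConverter : JsonConverter
        {
            public override bool CanConvert(Type objectType)
            {
                return objectType == typeof(long);
            }

            public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
            {
                writer.WriteValue(value.ToString());            // emit "1306270928525862486"
            }

            public override object ReadJson(JsonReader reader, Type objectType,
                                            object existingValue, JsonSerializer serializer)
            {
                return long.Parse((string)reader.Value);        // accept the string back as a long
            }
        }

        public class CustomerEntity
        {
            [JsonConverter(typeof(LongToStringConverter))]
            public long ID { get; set; }

            public string Name { get; set; }
        }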

    Read the article

  • Return an object after parsing XML with SAX

    - by sentimental_turtle
    I have some large XML files to parse and have created an object class to contain my relevant data. Unfortunately, I am unsure how to return the object for later processing. Right now I pickle my data and, moments later, unpickle the object to access it. This seems wasteful, and there surely must be a way of grabbing my data without hitting the disk.

        def endElement(self, name):
            if name == "info":  # done collecting this iteration
                self.data.setX(self.x)
                self.data.setY(self.y)
            elif name == "lastTagOfInterest":  # done with file
                # want to return my object from here
                filehandler = open(self.outputname + ".pi", "w")
                pickle.dump(self.data, filehandler)
                filehandler.close()

    I have tried putting a return statement in my endElement method, but that does not seem to get passed up the chain to where I call the SAX parser. Thanks for any tips.
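
    Return values from endElement are discarded by the SAX machinery; the usual pattern is to leave the finished object on the handler and read it after parse() returns, with no pickle round-trip. A minimal sketch (the handler name, file name, and the placeholder build step stand in for the real code above):

        import xml.sax

        class InfoHandler(xml.sax.ContentHandler):
            def __init__(self):
                xml.sax.ContentHandler.__init__(self)
                self.data = None              # filled in when "lastTagOfInterest" closes

            def endElement(self, name):
                if name == "lastTagOfInterest":
                    self.data = self.build_result()   # hypothetical: whatever object you assemble

            def build_result(self):
                return {"x": 1, "y": 2}       # placeholder for the real data object

        handler = InfoHandler()
        xml.sax.parse("input.xml", handler)
        result = handler.data                 # the parsed object, no disk round-trip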

    Read the article

  • SSIS - Limiting Concurrent Connections

    - by Bigtoe
    Hi folks, I am using SSIS to connect to a legacy mainframe database that allows only 5 concurrent connections at a time. I have a Data Flow task with many tables to transfer, and it fails because of this limitation. I have split the Data Flow task into separate data flows, and this is working for the moment, but it is not optimal, as they need to be sequenced and one large transfer in a flow holds up the subsequent transfers. Does anyone have any idea how to limit the number of concurrent connections in a single data flow? I had a look at the EngineThreads property, but that did not make any difference. Any help much appreciated.

    Read the article

  • How can I clean up this SELECT query?

    - by Cruachan
    I'm running PHP 5 and MySQL 5 on a dedicated server (Ubuntu Server 8.10) with full root access. I'm cleaning up some LAMP code I've inherited, and I have a large number of SQL selects with this type of construct:

        SELECT ... FROM table
        WHERE LCASE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(
              strSomeField, ' ', '-'), ',', ''), '/', '-'), '&', ''), '+', '')
        ) = $somevalue

    Ignoring the fact that the database should never have been constructed to require such a select in the first place, and that the $somevalue field will need to be parameterised to plug the gaping security hole, what is my best option for turning the WHERE condition into something less offensive? If I were using MSSQL or Oracle I'd simply put together a user-defined function, but my experience with MySQL is more limited and I've not written a UDF with it before, although I'm happy coding C. Update: For all those who've already raised their eyebrows at this in the original code, $somevalue is actually something like $_GET['product'] (there are a few variations on the theme). In this case the select is pulling the product back from the database by product name, after stripping out characters so it matches what was previously passed as a URI parameter.
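
    One option that avoids a UDF entirely (a sketch; the table name products and the column slug are my own): precompute the normalized value once into its own indexed column, keep it updated whenever strSomeField changes, and make the per-request lookup a plain indexed equality against a bound parameter.

        ALTER TABLE products ADD COLUMN slug VARCHAR(255);

        UPDATE products
           SET slug = LCASE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(
                      strSomeField, ' ', '-'), ',', ''), '/', '-'), '&', ''), '+', ''));

        CREATE INDEX idx_products_slug ON products (slug);

        -- the per-request query then becomes sargable and parameterised
        SELECT * FROM products WHERE slug = ?;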

    Read the article

  • Fastest way to store/retrieve a dictionary - SQL, text file...?

    - by AP257
    Hi all, this is a really dumb question, so I apologise, but I'd be grateful for some advice. I've got a text file of words and word frequencies. It's very large - theoretically we're talking millions of rows. I just want to retrieve values from the file, and do it as quickly and efficiently as possible (for a web app, in Django). My question is: what is the best way to store and retrieve the values? Should I import them into SQL? Or keep the file and use grep? Or put them into a JSON dictionary...? Or some other way? Sorry for the dumb question; I'd be very grateful for advice!
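
    For single-word lookups against millions of rows, a small indexed table is the usual answer: the data loads once, each lookup is an index seek rather than a file scan, and it maps directly onto a Django model. A sketch, with hypothetical table and column names:

        CREATE TABLE word_frequency (
            word      VARCHAR(100) NOT NULL,
            frequency INT          NOT NULL,
            PRIMARY KEY (word)             -- the primary key index makes lookups fast
        );

        SELECT frequency FROM word_frequency WHERE word = 'example';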

    Read the article

  • Indexed key vs indexed separate columns, which one is faster?

    - by Jerry
    In MySQL, from a pure performance perspective: I have a table with a large amount of data and a 10:1 read/write ratio. Is it faster, in read/write performance, to have the 4 search criteria in separate columns, all indexed, or to have them combined into one single string acting as a key, stored in one indexed column? For example: a table with 5 columns (first name, last name, sex, country, and file), where the first four columns will ALWAYS be given as part of the search parameters, versus a table with two columns (key and file), where the value of key could be john-smith-male-australia? I don't quite get the pros and cons; the point I am trying to stress is that all parameters will always be given in a search.
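
    For reference, the separate-columns design can still be served by a single index: a composite index over the four criteria lets the optimizer use all of them at once, without maintaining an artificial concatenated key that breaks as soon as one criterion becomes optional. A sketch with hypothetical names:

        CREATE INDEX idx_person_lookup
            ON person (first_name, last_name, sex, country);

        SELECT file
          FROM person
         WHERE first_name = 'john'
           AND last_name  = 'smith'
           AND sex        = 'male'
           AND country    = 'australia';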

    Read the article
