Search Results

Search found 9124 results on 365 pages for 'big sal'.

Page 246/365

  • Best environment to port C/C++ code from Linux to Windows.

    - by Simone Margaritelli
    I'd like to make a big project of mine buildable on Windows platforms. The project itself is written in C/C++ following POSIX standards, with some library dependencies such as libxml2, libcurl and so on. I'm more of a Linux developer than a Windows developer, so I have no idea which compiler suite I should use to port the code. Which one offers the most compatibility with the gcc 4.4.3 I'm using right now? My project also needs flex and bison; is there any "ready to use" environment for porting such projects to Windows platforms? Thanks.

    Read the article

  • How to set the default opened jQuery UI tab to something other than the first

    - by user568920
    I have a big web application using jQuery UI Tabs. In a central JS file I set up all the tabs using $("#tabs").tabs();. But on one page I need a tab other than the first one to be selected. If I use $("#tabs").tabs({ selected: add }); (the name of the tab is #add), it doesn't work, probably because the tabs are already set up. Does anyone know how to make a tab other than the first one open by default (after the page loads) when the tabs have already been initialized? I hope you will understand; my English is pretty terrible.

    Read the article

  • Why does InnoDB keep on growing with every update?

    - by Akash Kava
    I have a table which contains heavy blobs, and I wanted to conduct some tests on it. I know deleted space is not reclaimed by InnoDB, so I decided to reuse existing records by updating their values in place instead of creating new records. But I noticed that whether I delete and insert a new entry, or I UPDATE an existing row, InnoDB keeps on growing. Assuming I have 100 rows, each storing 500KB of information, and my InnoDB size is 10MB: when I call UPDATE on all rows (no insert, no delete), InnoDB grows by ~8MB for every run I do. All I am doing is storing exactly 500KB of data in each row, with little modification, and the size of the blob is fixed. What can I do to prevent this? I know about OPTIMIZE TABLE, but I can't use it because in regular usage the table is going to be 60-100GB big, and running OPTIMIZE would just stall the entire server.
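
    One setting often suggested for this kind of blob churn (not mentioned in the post, so treat it as an assumption about the setup) is InnoDB's file-per-table mode: the shared ibdata1 tablespace never shrinks, but with per-table .ibd files an occasional OPTIMIZE TABLE can actually hand space back to the filesystem. A minimal my.cnf sketch, assuming a MySQL 5.x server:

        # my.cnf -- assumption: MySQL 5.x with the InnoDB storage engine
        [mysqld]
        # Store each InnoDB table in its own .ibd file instead of the shared
        # ibdata1 tablespace, so a table rebuild can release disk space.
        innodb_file_per_table = 1

    Whether a periodic OPTIMIZE TABLE is affordable on a 60-100GB table remains the open question raised in the post.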

    Read the article

  • return type while using GMP.h header file

    - by Raveesh_Kumar
    While using the gmp.h header file, I need a function which takes inputs of type mpz_t and returns an mpz_t type too. I'm a complete beginner with gmp.h, so here is a snapshot of my attempted code:

        mpz_t sum_upto(mpz_t max) {
            mpz_t sum;
            mpz_init(sum);
            mpz_init(result);
            for(int i=0;i<=max-1;i++)
                mpz_add_ui(sum,sum,pow(2,i));
            return sum;
        }

    But it shows these errors: 1. "pow has not been declared in this scope", although I have included math.h at the very beginning of the file. 2. "sum_upto declared as function returning an array". Please tell me as soon as possible; I have to complete my very big project.

    Read the article

  • How to make a UILabel which adjusts its text to the upper left?

    - by mystify
    For some strange reason, in iPhone OS 3.0 this doesn't work: I made a big fullscreen UILabel with numberOfLines = 0 and baselineAdjustment = UIBaselineAdjustmentNone. It refuses to show the text in the upper left; it's always vertically centered in the bounding box, aligned to the left. The documentation says: "UIBaselineAdjustmentNone: Adjust text relative to the top-left corner of the bounding box. This is the default adjustment. Available in iPhone OS 2.0 and later." Probably a framework bug? I started with shiny new labels to test it, and the text is still centered.

    Read the article

  • [C++] Which is better: throwing an exception or returning a nonzero value?

    - by xis19
    When programming in C++, you have two choices for reporting an error. I suppose many teachers would suggest you throw an exception derived from std::exception. Another way, which might be more "C" style, is to return a non-zero value, with zero meaning "ERROR_SUCCESS". Certainly, throwing an exception can provide much more information about the error and aid recovery, although the code bloats a little and keeping everything exception-safe in your head is a little difficult, for me at least. The other way, returning an error code, makes reporting an error much easier; the drawback is that managing recovery becomes a potentially big problem. So folks, as good programmers, which would be your preference, not considering your boss's opinion? For me, I would rather return nonzero values.

    Read the article

  • Can JavaScript be overused?

    - by ledhed2222
    Hello stackoverflow, I'm a "long time reader first time poster", glad to start participating in this forum. My experience is with Java, Python, and several audio programming languages; I'm quite new to the big bad web technologies: HTML/CSS/JavaScript. I'm making two personal sites right now and am wondering if I'm relying on JavaScript too much. I'm making a site where all pages have a bit of markup in common--stuff like the nav bar and some sliced background images--so I thought I'd make a pageInit() function to insert the majority of the HTML for me. This way if I make a change later, I just change the script rather than all the pages. I figure if users are paranoid enough to have JavaScript turned off, I'll give them an alert or something. Is this bad practice? Can JavaScript be overused? Thanks in advance.

    Read the article

  • resize image without losing quality with Gimp

    - by madcoderz
    Hi, I have a bunch of images which are way too big; I need to decrease their size from 30 KB to 10 or 5 KB without losing quality. I tried changing the DPI and pixel dimensions with no success: the images got blurred, and since they contain text I can't read anything after the changes. Is there any way I can accomplish this without losing quality? I have almost a dozen images in my application. Thanks in advance and have a nice day.

    Read the article

  • Python - read numbers from text file and put into list

    - by user1647372
    So, like the title says, I'm starting to learn some Python and I'm having trouble picking up on this technique. What I need to accomplish is to read in some numbers and store them in a list. The text file looks like the following:

        0 0 3 50 50 100 4 20

    Basically these are coordinates and directions to be used by Python's turtle to make shapes. I've got that part down; the only problem is getting them into the correct format. What I cannot figure out is how to get those numbers from the file into

        [ [0, 0, 3, 50], [50, 100, 4, 20] ]

    i.e. a list, with each set of four coordinates being a list inside that one big list. Here is my attempt, but as I said I need some help - thank you.

        polyShape=[]
        infile = open(name,"r")
        num = int(infile.readline(2))
        while num != "":
            polyShape.append(num)
            num = int(infile.readline(2))
        infile.close()
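
    A minimal sketch of one common way to do this, assuming the file really does contain whitespace-separated integers in groups of four (the filename coords.txt is just a placeholder):

        # Assumption: whitespace-separated integers, four per shape.
        def read_shapes(path):
            with open(path) as infile:
                numbers = [int(token) for token in infile.read().split()]
            # Group the flat list into sublists of four:
            # [[0, 0, 3, 50], [50, 100, 4, 20]]
            return [numbers[i:i + 4] for i in range(0, len(numbers), 4)]

        print(read_shapes("coords.txt"))

    The int(infile.readline(2)) calls in the attempt above read at most two characters at a time, so multi-digit numbers get split and a bare newline eventually raises a ValueError; that is why the loop never behaves as expected.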

    Read the article

  • SQL Server PIVOT on key-value table

    - by Zenox
    I have a table that stores attributes as key-value pairs. Example:

        CREATE TABLE ObjectAttributes (
            objectId int,
            key nvarchar(64),
            value nvarchar(512)
        )

    When I select from this I get:

        objectId    key      value
        ----------------------------
        1           Key 1    Value 1
        1           Key 2    Value 2

    I was wondering if I could use the PIVOT syntax to turn this into:

        objectId    Key 1      Key 2
        ----------------------------
        1           Value 1    Value 2

    I know all of my tables will have the same keys. (Unfortunately I cannot easily change the table structure; this is what is leading me to attempt using PIVOT.) The big issue here, though, is that PIVOT requires an aggregate function to be used. Is there a way to avoid this? Am I completely wrong to attempt this? Or is there a better solution?

    Read the article

  • Self Modifying Python? How can I redirect all print statements within a function without touching sys.stdout?

    - by Fake Name
    I have a situation where I am attempting to port some big, complex Python routines to a threaded environment. I want to be able, on a per-call basis, to redirect the output from the function's print statements somewhere else (a logging.Logger, to be specific). I really don't want to modify the source for the code I am compiling, because I need to maintain backwards compatibility with other software that calls these modules (which is single-threaded, and captures output by simply grabbing everything written to sys.stdout). I know the best option is to do some rewriting, but I really don't have a choice here. Edit - Alternatively, is there any way I can override the local definition of print to point to a different function? I could then make the local print default to the system print unless it is overridden by a kwarg, and it would only involve modifying a few lines at the beginning of each routine.
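
    A minimal sketch of that "override print locally" idea, assuming Python 3 (or from __future__ import print_function) so that print is an ordinary function; the routine and logger names here are made up:

        import builtins
        import logging

        logging.basicConfig(level=logging.INFO)
        log = logging.getLogger("routines")

        def big_routine(x, print=builtins.print):
            # The body's existing print calls are untouched; only the signature changed.
            print("starting work on", x)
            print("done with", x)

        big_routine(1)  # goes to sys.stdout exactly as before
        big_routine(2, print=lambda *args: log.info(" ".join(str(a) for a in args)))

    Each threaded caller can then hand in its own logger without ever touching the process-wide sys.stdout.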

    Read the article

  • why can't you use the debug/release version of a lib interchangeably

    - by user205834
    In C++, most libs come in Debug/Release versions. Question 1: What are the big differences between the Debug and Release versions (e.g. what advantages does using one over the other give you)? Question 2: A lib just contains an implementation of some functions; how does a function's implementation change depending on whether you use the debug or release version? Question 3: Can you ever build your app in debug mode and use a release version of a lib? Thanks.

    Read the article

  • JQuery-code on different content pages

    - by Ockonal
    Hello, I have a site with a menu which reloads content. The content is loaded dynamically, using AJAX and a PHP script. Different content pages need different jQuery plugins, but if I include the needed plugin directly in a content page, I get big lags during loading. So for now I'm including all the plugins in the main page, and that's not the best idea either. Any suggestions? UPD: I have a div with content, which is shown with effects (rolling up/down). When part of the menu is clicked, I send an AJAX request to the PHP script, which reads the needed text from the database and returns it to the main page that sent the request. Then I paste that content into the div and roll it down. If I include the jQuery plugins in the content pages, the rolling animation won't work properly.

    Read the article

  • Multiple row insertion in a faster way

    - by Cyborgo
    Hi, I am using Rails and I am trying to insert new rows into a table. The data comes from a text file, and I created a hash keyed on one of the attributes. I then find all the keys that are not currently in the table, and I need to insert all of them into the table along with the attributes I saved in the hash.

        newvals.each { |x|
          author = Author.create(:first => x, :second => hash[x][0], :third => hash[x][1])
          author.save
        }

    This has a lot of overhead, as it issues one SQL statement at a time. The newvals array is usually quite big, around 20-30k entries. I know create can insert multiple objects at once, but I can't seem to figure out how. Is there any way to get this done? Thanks.
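
    Outside Rails, the usual cure is to batch the rows into a single statement (or at least a single transaction) instead of one round trip per record. Purely as an illustration of that batching idea - not Rails code - here is a minimal Python/sqlite3 sketch; the table and column names are made up:

        import sqlite3

        rows = [("key1", "a", "b"), ("key2", "c", "d")]  # stand-in for the 20-30k new values

        conn = sqlite3.connect("authors.db")
        conn.execute("CREATE TABLE IF NOT EXISTS authors (first TEXT, second TEXT, third TEXT)")
        with conn:  # one transaction for the whole batch
            conn.executemany("INSERT INTO authors (first, second, third) VALUES (?, ?, ?)", rows)
        conn.close()

    The equivalent in a Rails project would be a bulk-insert plugin or a hand-built multi-row INSERT, but the principle is the same: fewer statements, one transaction.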

    Read the article

  • web service data type (contract)

    - by cyberguest
    Hi, I have a general design question. We have a fairly big data model that represents a clinical object; the object itself has 200+ child attributes in its hierarchy, and we have a SetObject operation and a GetObject operation. My question is: best-practice wise, would it make sense to use that single data model in both operations, or a different data model for each? The Get operation will return many more details than what's needed for Set. An example of what I mean: say the data model has ProviderId and ProviderName attributes. In the Get operation, both ProviderId and ProviderName need to be returned. However, in the Set operation only the ProviderId is needed, and ProviderName is ignored by the service since the system already has that information. In this case, if the Get and Set operations use the same data model, ProviderName is exposed even for the Set operation; does that confuse the consuming developer?

    Read the article

  • Beginners PHP / mySQL question

    - by Reg H
    I'm brand new to PHP & MySQL, and one function I'm creating needs to access a large table or database. I've created the database and it's currently in a MySQL table, which I'm accessing with no problem. The table is 11,000 rows in length, with 8 columns (all text less than 8 characters long) - it's static, and will never change. Without getting too particular, my users will hit a button which will trigger scripts to access the data, say 500 times or more. So in general would it be better practice to include all of this data in a big 'switch' or 'if... then' conditional right in my scripts, rather than opening and accessing the database connection hundreds, or maybe even thousands of times? It just seems like that might be a bottleneck waiting to happen. Thanks!

    Read the article

  • Sphinx without using an auto_increment id

    - by squeeks
    I am currently planning to create a big database (2+ million rows) with a variety of data from separate sources. I would like to avoid structuring the database around auto_increment ids, to help prevent sync issues with replication and also because each item inserted will have an alphanumeric product code that is guaranteed to be unique - it seems to make more sense to use that instead. I am looking at a search engine to index this database, with Sphinx looking rather appealing due to its design around indexing relational databases. However, the various tutorials and documentation seem to show database designs depending on an auto_increment field in one form or another, and a rather bold statement in the documentation says that document ids must be 32/64-bit integers only or things break. Is there a way to have a database indexed by Sphinx without auto_increment fields as the id?

    Read the article

  • If I had to create a server farm, what should I do? (How do other companies work?)

    - by Michel
    I have a business idea, but this is not really my field, so I'm trying to understand what I'm really dealing with. I have a general understanding of how a web farm works - I'm a web developer and I understand the technology behind it - but I'm curious to know specifically how a web farm is set up and which professionals are involved. Things to consider are probably the hardware, the power supply, software management, the cooling system and, if possible, the use of green technology/power recycling. Basically, can you explain to me how a web hosting company is structured and how it works? A big question, I know, but from what I've already read here about the subject, a lot of you know this stuff. Thanks for your time, guys!

    Read the article

  • How to predict result set row count?

    - by Saurabh Kumar
    I have an application where I build a big SQL query dynamically for SQL Server 2008. The query is based on various search criteria which the user might give, such as searching by last name, first name, SSN, etc. The requirement is that if the user gives a condition that makes the resulting query return a lot of rows (configurable, up to a maximum of N rows), then the application must instead send back a message telling the user to refine the search, because the existing query would return too many rows. I would not want to bring back, say, 5000 rows to the client and then discard that data just to show the user an error. What is an efficient way to tackle this issue?
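
    One straightforward approach (not from the post) is to run the same dynamically built WHERE clause through a COUNT(*) first and only fetch the data when the count is within the limit. A minimal Python/pyodbc sketch of that idea - the table, columns, connection and the 1000-row limit are all placeholders:

        import pyodbc

        MAX_ROWS = 1000  # assumed configurable limit

        def run_search(conn, where_clause, params):
            cur = conn.cursor()
            # Same WHERE clause and parameters, but only counting.
            cur.execute("SELECT COUNT(*) FROM Persons WHERE " + where_clause, params)
            total = cur.fetchone()[0]
            if total > MAX_ROWS:
                return None, total  # caller shows the "please refine your search" message
            cur.execute("SELECT LastName, FirstName, SSN FROM Persons WHERE " + where_clause, params)
            return cur.fetchall(), total

    An alternative that avoids scanning twice is to SELECT TOP (MAX_ROWS + 1) rows and show the warning whenever more than MAX_ROWS come back.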

    Read the article

  • What open source C/C++ audio compression options are there besides LAME MP3?

    - by Ole Jak
    Are there any C/C++ open source audio encoders besides LAME MP3? It doesn't need to be exactly the MP3 format; I just need a "compressed digital audio file". I do not want to use LAME because it is too big, and no programmer seems able to answer a simple question about it (such as sharing a simple, easily downloadable and readable project containing only the two functions needed), so I'm tired of searching for help with it. I need something fresh and powerful, but more readable than this lib I found (mp3stego). "I don't want LAME because I am fighting its monopoly", haha.

    Read the article

  • Hibernate HQL to basic SQL (no joins)

    - by CC
    Hello everybody, I'm working on a project with Hibernate, and we need to replace Hibernate with some "home-made persistence" stuff. The issue is that the project is fairly big and we have many HQL queries. The problem is with queries like

        select a, b from table1, table2 on t1.table1 = t2.table2

    Basically, joins are not supported at all by our "hand-made persistence" stuff. What I would need is some sort of transcoder that takes the HQL queries as input and outputs SQL, but SQL without joins - I hope you get the idea. My persistence layer does not support joins. Does anybody have any idea about something like that? Some framework, or something? Thanks a lot, everybody. C.C.

    Read the article

  • How long would this have to go on for...

    - by Pieman
    I have the Pi formula - well, one of them: 1 - 1/3 + 1/5 - 1/7 etc. How long would it take to get it correct to something like 1000 significant figures? Well, not how long - how big would the denominator be? I have it updating 4 times in one refresh: http://zombiewrath.com/pi.php. So the section above would be done in one refresh, then 7 to 13 in another, etc. Please answer this maths question :) Also, how can I get the 10,002-character-long variable onto 'separate lines'? I want it to fill 100% of the screen width, with no scrolling needed (well, downwards only).
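
    For reference, this is the Leibniz series, and the standard alternating-series error bound (a worked estimate, not from the original post) answers the "how big is the denominator" part:

        \[
        \frac{\pi}{4} \;=\; \sum_{k=0}^{\infty} \frac{(-1)^k}{2k+1},
        \qquad
        \left|\, \frac{\pi}{4} - \sum_{k=0}^{n-1} \frac{(-1)^k}{2k+1} \right| \;<\; \frac{1}{2n+1}.
        \]

    To get d correct decimal digits the truncation error must drop below 10^{-d}, so 2n+1 > 10^{d}. For d = 1000 that means on the order of 10^{1000} terms, so the denominator of the last term used is itself a number roughly 1000 digits long - far beyond what adding four terms per page refresh will ever reach.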

    Read the article

  • Question about inserting/updating rows with MS SQL (ASP.NET MVC)

    - by Alex
    I have a very big table with a lot of rows; every row holds stats for a user on a certain day, and obviously I don't have any stats for the future. To update the stats I use

        UPDATE Stats SET Visits=@val WHERE ... a lot of conditions ... AND Date=@Today

    But what if the row doesn't exist? I'd have to use

        INSERT INTO Stats (...) VALUES (Visits=@val, ..., Date=@Today)

    How can I check whether the row exists or not? Is there any way other than doing a COUNT(*)? If I pre-fill the table with empty cells, it would take hundreds of thousands of rows, taking up megabytes while storing no data.

    Read the article

  • How to reorder the rows of one matrix with respect to another matrix?

    - by user2806363
    I have two big matrices A and B with different dimensions. I want to order the rows of matrix B with respect to the rows of matrix A, and add rows of 0s to matrix B for any row name that exists in A but not in B. Here is a reproducible example and the expected output:

        A <- matrix(c(1:40), ncol=8)
        rownames(A) <- c("B", "A", "C", "D", "E")
        > A
          [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
        B    1    6   11   16   21   26   31   36
        A    2    7   12   17   22   27   32   37
        C    3    8   13   18   23   28   33   38
        D    4    9   14   19   24   29   34   39
        E    5   10   15   20   25   30   35   40

        B <- matrix(c(100:108), ncol=3)
        rownames(B) <- c("A", "E", "C")
        > B
          [,1] [,2] [,3]
        A  100  103  106
        E  101  104  107
        C  102  105  108

    Here is the expected output:

        > B
          [,1] [,2] [,3]
        B    0    0    0
        A  100  103  106
        C  102  105  108
        D    0    0    0
        E  101  104  107

    Would someone help me to implement this in R?

    Read the article

  • CouchDB: How to change view function via javascript?

    - by osti
    Hello guys, I am playing around with CouchDB to test whether it is "possible" [1] to store scientific data (simulated and experimental raw data + metadata). A big pro is the schema-less approach of CouchDB: we have to be very flexible with the metadata, as the set of parameters changes very often. Up to now I have some code to feed raw data and plots (both as attachments) and hierarchical metadata (as JSON) into CouchDB documents, and I have written some prototype JavaScript for filtering and display. But the filtering is done on the client side (a.k.a. the browser): the map function simply returns everything. How could I change the map function of a specific _design document (or push a second one) with simple browser-side JS? I do not think that a temporary view would yield any performance gain... Thanks for your time and answers. [1]: of course it is possible, but is it also useful? feasible? reasonable?
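
    A design document is itself just a document, so the mechanics are the same from any HTTP client, browser-side XMLHttpRequest included: GET it, edit its views, and PUT it back with the current _rev. A minimal sketch of those three steps using Python and requests (the database, design-doc and view names are made up; the same calls can be adapted to browser JS):

        import requests

        base = "http://localhost:5984/scidata/_design/filters"  # made-up db and design doc

        ddoc = requests.get(base).json()  # current design doc, including its _rev
        ddoc.setdefault("views", {})["by_parameter"] = {
            "map": "function(doc) { if (doc.meta && doc.meta.parameter) "
                   "emit(doc.meta.parameter, null); }"
        }
        resp = requests.put(base, json=ddoc)  # PUT back with the same _rev to update
        resp.raise_for_status()

    After the PUT, the first query of the new view triggers (re)indexing, so the initial request can be slow on a large database.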

    Read the article
