Search Results

Search found 6172 results on 247 pages for 'limit choices to'.

Page 217/247

  • How to check if a position inside a std::string exists? (C++)

    - by yox
    Hello, I have a long string variable and I want to search it for specific words and trim the text around those words. Say I have the following text: "This amazing new wearable audio solution features a working speaker embedded into the front of the shirt and can play music or sound effects appropriate for any situation. It's just like starring in your own movie" and the words: "solution", "movie". I want to extract from the big string (like Google does on its results page): "...new wearable audio solution features a working speaker embedded..." and "...just like starring in your own movie". For that I'm using this code:

        for (std::vector<string>::iterator it = words.begin(); it != words.end(); ++it) {
            int loc1 = (int)desc.find( *it, 0 );
            if( loc1 != string::npos ) {
                while(desc.at(loc1-i) && i<=80){
                    i++;
                    from=loc1-i;
                    if(i==80) fromdots=true;
                }
                i=0;
                while(desc.at(loc1+(int)(*it).size()+i) && i<=80){
                    i++;
                    to=loc1+(int)(*it).size()+i;
                    if(i==80) todots=true;
                }
                for(int i=from;i<=to;i++){
                    if(fromdots) mini+="...";
                    mini+=desc.at(i);
                    if(todots) mini+="...";
                }
            }
        }

    But desc.at(loc1-i) causes an OutOfRange exception... I don't know how to check whether that position exists without causing an exception! Help please!
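    std::string::at() throws on any out-of-range index, so the usual approach is to clamp the excerpt window against size() before indexing, or to use substr(), which never needs per-character probing. A minimal sketch of the idea (the 80-character window and the variable names are taken from the question; this is an illustration, not the asker's code):

        #include <string>
        #include <algorithm>

        // Return an excerpt of `desc` with up to `radius` characters on each
        // side of the match at `loc`, adding "..." where text was cut off.
        std::string excerpt(const std::string &desc, std::size_t loc,
                            std::size_t wordLen, std::size_t radius = 80)
        {
            std::size_t from = (loc > radius) ? loc - radius : 0;              // clamp left edge
            std::size_t to   = std::min(desc.size(), loc + wordLen + radius);  // clamp right edge
            std::string mini;
            if (from > 0) mini += "...";
            mini += desc.substr(from, to - from);   // no per-character at() calls needed
            if (to < desc.size()) mini += "...";
            return mini;
        }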


  • File upload issue

    - by Varun
    I am working on a PHP-based ticket management system. While creating a ticket, one can upload an attachment. I want to put a limit (say 10 MB) per file upload. To implement this I plan the following:

    1. In php.ini, set post_max_size = 10M.
    2. In the PHP script which receives the POST: since the file is larger than post_max_size, $_FILES[] will be empty, but I can still check the Content-Length header and discard the upload if the size is more than 10M.

    While testing this I tried uploading a file of 1 GB and analysed the HTTP traffic, and this is what I found:

    - The entire 1 GB of data is first uploaded to the server temporarily and discarded once the HTTP request completes. I couldn't find out exactly where the file was being saved (it was not in the temporary directory on the server), but my HTTP traffic analyzer showed that the browser did send 1 GB of data to the server.
    - The PHP script execution started only after completion of the HTTP request (i.e. after the entire 1 GB was uploaded).

    Now I have 2 concerns:

    a) People may exploit my server bandwidth by trying to upload large files, which I will have to discard anyway.
    b) Even worse, if someone starts uploading a huge file (say 100 GB), the entire 100 GB of data is first uploaded to the server temporarily, which means that for that period it will consume that much space on my server.

    What's the common solution for this? Am I missing something here?
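    PHP only sees the request after the body has arrived, so the earliest PHP-side defence is to reject on the Content-Length header before doing any work; actually stopping the transfer has to happen in front of PHP, e.g. Apache's LimitRequestBody directive or nginx's client_max_body_size, which drop the connection once the limit is exceeded. A sketch of the PHP-side early check (assuming an Apache-style $_SERVER population):

        <?php
        $maxBytes = 10 * 1024 * 1024; // 10 MB

        // Reject before touching $_FILES; the client has still sent the body,
        // but nothing is written or processed on our side.
        if (isset($_SERVER['CONTENT_LENGTH']) && (int)$_SERVER['CONTENT_LENGTH'] > $maxBytes) {
            header('HTTP/1.1 413 Request Entity Too Large');
            exit('Attachment exceeds the 10 MB limit.');
        }
        ?>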


  • Subquery with multiple results combined into a single field?

    - by Todd
    Assume I have these tables, from which I need to display search results in a browser:

        Table: Containers
        id | name
        1  | Big Box
        2  | Grocery Bag
        3  | Envelope
        4  | Zip Lock

        Table: Sale
        id | date     | containerid
        1  | 20100101 | 1
        2  | 20100102 | 2
        3  | 20091201 | 3
        4  | 20091115 | 4

        Table: Items
        id | name        | saleid
        1  | Barbie Doll | 1
        2  | Coin        | 3
        3  | Pop-Top     | 4
        4  | Barbie Doll | 2
        5  | Coin        | 4

    I need output that looks like this:

        itemid | itemname    | saleids | saledates         | containerids | containertypes
        1      | Barbie Doll | 1,2     | 20100101,20100102 | 1,2          | Big Box, Grocery Bag
        2      | Coin        | 3,4     | 20091201,20091115 | 3,4          | Envelope, Zip Lock
        3      | Pop-Top     | 4       | 20091115          | 4            | Zip Lock

    The important part is that each item type gets only one record/row in the result shown on screen. I accomplished this in the past by returning multiple rows of the same item and using a scripting language to limit the output. However, this makes the UI overly complicated and loopy. So I'm hoping I can get the database to spit out only as many records as there are rows to display. This example may be a bit extreme because of the 2 joins needed to get from the item to the container (through the sale table). I'd be happy with just an example query that outputs this:

        itemid | itemname    | saleids | saledates
        1      | Barbie Doll | 1,2     | 20100101,20100102
        2      | Coin        | 3,4     | 20091201,20091115
        3      | Pop-Top     | 4       | 20091115

    I can only return a single result in a subquery, so I'm not sure how to do this.
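    If the database is MySQL, GROUP_CONCAT() aggregates a group's values into one comma-separated field, which matches the desired shape exactly. A sketch for the reduced output (table and column names taken from the question, untested against the real schema):

        SELECT i.name AS itemname,
               GROUP_CONCAT(s.id)   AS saleids,
               GROUP_CONCAT(s.date) AS saledates
        FROM Items i
        JOIN Sale s ON s.id = i.saleid
        GROUP BY i.name;

    Adding a second join to Containers and two more GROUP_CONCAT columns extends it to the full output. Note that GROUP_CONCAT truncates results at the group_concat_max_len setting (1024 bytes by default).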


  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterprise Edition box. MySQL is about the only program running on it. We have approx. 8 slaves replicating from it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16G of RAM, 10 terabytes of drives in RAID 10, and four dual-core processors. From what I have seen on other sites, we have a really robust machine as our master db server. We just upgraded from a machine with only 4G of RAM but similar hard drives, RAID, etc. It also ran Apache, so it was both our db server and our application server. It was getting a little slow, so we moved the db onto this new machine and kept the application server on the first one. We also distributed the application load amongst a few of our other slave servers, which also run the application. The problem is that on the new db server mysqld.exe consumes 95-100% of CPU almost all the time and is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net covers config files for small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down! FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly, because if I change some settings (like the server-id) it will kill the server at startup. Here is the my.ini file:

        #MySQL Server Instance Configuration File
        # ----------------------------------------------------------------------
        # Generated by the MySQL Server Instance Configuration Wizard
        #
        # Installation Instructions
        # ----------------------------------------------------------------------
        #
        # On Linux you can copy this file to /etc/my.cnf to set global options,
        # mysql-data-dir/my.cnf to set server-specific options
        # (@localstatedir@ for this installation) or to
        # ~/.my.cnf to set user-specific options.
        #
        # On Windows you should keep this file in the installation directory
        # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To
        # make sure the server reads the config file use the startup option
        # "--defaults-file".
        #
        # To run the server from the command line, execute this in a
        # command line shell, e.g.
        # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # To install the server as a Windows service manually, execute this in a
        # command line shell, e.g.
        # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # And then execute this in a command line shell to start the server, e.g.
        # net start MySQLXY
        #
        # Guidelines for editing this file
        # ----------------------------------------------------------------------
        #
        # In this file, you can use all long options that the program supports.
        # If you want to know the options a program supports, start the program
        # with the "--help" option.
        #
        # More detailed information about the individual options can also be
        # found in the manual.
        #
        # CLIENT SECTION
        # ----------------------------------------------------------------------
        #
        # The following options will be read by MySQL client applications.
        # Note that only client applications shipped by MySQL are guaranteed
        # to read this section. If you want your own MySQL client program to
        # honor these values, you need to specify it as an option during the
        # MySQL client library initialization.
        #
        [client]
        port=3306

        [mysql]
        default-character-set=latin1

        # SERVER SECTION
        # ----------------------------------------------------------------------
        #
        # The following options will be read by the MySQL Server. Make sure that
        # you have installed the server correctly (see above) so it reads this
        # file.
        #
        [mysqld]

        # The TCP/IP Port the MySQL Server will listen on
        port=3306

        # Path to installation directory. All paths are usually resolved relative to this.
        basedir="D:/MySQL/"

        # Path to the database root
        datadir="D:/MySQL/data"

        # The default character set that will be used when a new schema or table is
        # created and no character set is defined
        default-character-set=latin1

        # The default storage engine that will be used when creating new tables
        default-storage-engine=MYISAM

        # Set the SQL mode to strict
        #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
        # we changed this because there are a couple of queries that can get blocked otherwise
        sql-mode=""

        # performance configs
        skip-locking
        max_allowed_packet = 1M
        table_open_cache = 512

        # The maximum amount of concurrent sessions the MySQL server will
        # allow. One of these connections will be reserved for a user with
        # SUPER privileges to allow the administrator to login even if the
        # connection limit has been reached.
        max_connections=1510

        # Query cache is used to cache SELECT results and later return them
        # without actually executing the same query once again. Having the query
        # cache enabled may result in significant speed improvements, if you
        # have a lot of identical queries and rarely changing tables. See the
        # "Qcache_lowmem_prunes" status variable to check if the current value
        # is high enough for your load.
        # Note: In case your tables change very often or if your queries are
        # textually different every time, the query cache may result in a
        # slowdown instead of a performance improvement.
        query_cache_size=168M

        # The number of open tables for all threads. Increasing this value
        # increases the number of file descriptors that mysqld requires.
        # Therefore you have to make sure to set the amount of open files
        # allowed to at least 4096 in the variable "open-files-limit" in
        # section [mysqld_safe]
        table_cache=3020

        # Maximum size for internal (in-memory) temporary tables. If a table
        # grows larger than this value, it is automatically converted to a disk
        # based table. This limitation is for a single table. There can be many
        # of them.
        tmp_table_size=30M

        # How many threads we should keep in a cache for reuse. When a client
        # disconnects, the client's threads are put in the cache if there aren't
        # more than thread_cache_size threads from before. This greatly reduces
        # the amount of thread creations needed if you have a lot of new
        # connections. (Normally this doesn't give a notable performance
        # improvement if you have a good thread implementation.)
        thread_cache_size=64

        #*** MyISAM Specific options

        # The maximum size of the temporary file MySQL is allowed to use while
        # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE).
        # If the file-size would be bigger than this, the index will be created
        # through the key cache (which is slower).
        myisam_max_sort_file_size=100G

        # If the temporary file used for fast index creation would be bigger
        # than using the key cache by the amount specified here, then prefer the
        # key cache method. This is mainly used to force long character keys in
        # large tables to use the slower key cache method to create the index.
        myisam_sort_buffer_size=64M

        # Size of the Key Buffer, used to cache index blocks for MyISAM tables.
        # Do not set it larger than 30% of your available memory, as some memory
        # is also required by the OS to cache rows. Even if you're not using
        # MyISAM tables, you should still set it to 8-64M as it will also be
        # used for internal temporary disk tables.
        key_buffer_size=3072M

        # Size of the buffer used for doing full table scans of MyISAM tables.
        # Allocated per thread, if a full scan is needed.
        read_buffer_size=2M
        read_rnd_buffer_size=8M

        # This buffer is allocated when MySQL needs to rebuild the index in
        # REPAIR, OPTIMIZE, ALTER table statements as well as in LOAD DATA INFILE
        # into an empty table. It is allocated per thread so be careful with
        # large settings.
        sort_buffer_size=2M

        #*** INNODB Specific options ***
        innodb_data_home_dir="D:/MySQL InnoDB Datafiles/"

        # Use this option if you have a MySQL server with InnoDB support enabled
        # but you do not plan to use it. This will save memory and disk space
        # and speed up some things.
        skip-innodb

        # Additional memory pool that is used by InnoDB to store metadata
        # information. If InnoDB requires more memory for this purpose it will
        # start to allocate it from the OS. As this is fast enough on most
        # recent operating systems, you normally do not need to change this
        # value. SHOW INNODB STATUS will display the current amount used.
        innodb_additional_mem_pool_size=11M

        # If set to 1, InnoDB will flush (fsync) the transaction logs to the
        # disk at each commit, which offers full ACID behavior. If you are
        # willing to compromise this safety, and you are running small
        # transactions, you may set this to 0 or 2 to reduce disk I/O to the
        # logs. Value 0 means that the log is only written to the log file and
        # the log file flushed to disk approximately once per second. Value 2
        # means the log is written to the log file at each commit, but the log
        # file is only flushed to disk approximately once per second.
        innodb_flush_log_at_trx_commit=1

        # The size of the buffer InnoDB uses for buffering log data. As soon as
        # it is full, InnoDB will have to flush it to disk. As it is flushed
        # once per second anyway, it does not make sense to have it very large
        # (even with long transactions).
        innodb_log_buffer_size=6M

        # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and
        # row data. The bigger you set this the less disk I/O is needed to
        # access data in tables. On a dedicated database server you may set this
        # parameter up to 80% of the machine physical memory size. Do not set it
        # too large, though, because competition of the physical memory may
        # cause paging in the operating system. Note that on 32bit systems you
        # might be limited to 2-3.5G of user level memory per process, so do not
        # set it too high.
        innodb_buffer_pool_size=500M

        # Size of each log file in a log group. You should set the combined size
        # of log files to about 25%-100% of your buffer pool size to avoid
        # unneeded buffer pool flush activity on log file overwrite. However,
        # note that a larger logfile size will increase the time needed for the
        # recovery process.
        innodb_log_file_size=100M

        # Number of threads allowed inside the InnoDB kernel. The optimal value
        # depends highly on the application, hardware as well as the OS
        # scheduler properties. A too high value may lead to thread thrashing.
        innodb_thread_concurrency=10

        # replication settings (this is the master)
        log-bin=log
        server-id = 1

    Thanks for all the help. It is greatly appreciated.
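    One way to narrow this down before tuning (generic MySQL diagnostics, not specific to this box): on a MyISAM-only 16G server, key_buffer_size only covers indexes, MyISAM data pages rely on the OS file cache, and a missing index on a hot query will pin a CPU at 100% no matter what the config says. These statements usually point at the culprit:

        -- key buffer effectiveness: compare Key_reads to Key_read_requests
        SHOW GLOBAL STATUS LIKE 'Key%';

        -- what is running right now; long queries in "Copying to tmp table"
        -- or "Sorting result" states are likely CPU burners
        SHOW FULL PROCESSLIST;

        -- high Select_scan / Handler_read_rnd_next suggests missing indexes
        SHOW GLOBAL STATUS LIKE 'Select%';
        SHOW GLOBAL STATUS LIKE 'Handler_read%';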


  • Stable/repeatable random sort (MySQL, Rails)

    - by Matt Rogish
    I'd like to paginate through a randomly sorted list of ActiveRecord models (rows from a MySQL database). However, this randomization needs to persist on a per-session basis, so that other people visiting the website also receive a random, paginate-able list of records. Let's say there are enough entities (tens of thousands) that storing the randomly sorted ID values in either the session or a cookie is too large, so I must temporarily persist it in some other way (MySQL, file, etc.). Initially I thought I could create a function based on the session ID and the page ID (returning the object IDs for that page), but since the object ID values in MySQL are not sequential (there are gaps), that seemed to fall apart as I was poking at it. The nice thing is that it would require no/minimal storage, but the downsides are that it is likely pretty complex to implement and probably CPU intensive. My feeling is I should create an intersection table, something like:

        random_sorts( sort_id, created_at, user_id NULL if guest )
        random_sort_items( sort_id, item_id, position )

    And then simply store the sort_id in the session. Then I can paginate the usual way: WHERE sort_id = n ORDER BY position LIMIT... Of course, I'd have to put some sort of a reaper in there to remove them after some period of inactivity (based on random_sorts.created_at). Unfortunately, I'd have to invalidate the sort as new objects were created (and/or old objects removed, although deletion is very rare). And as load increases, the size/performance of this table (even properly indexed) drops. It seems like this ought to be a solved problem, but I can't find any Rails plugins that do this... Any ideas? Thanks!!
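    One storage-free approach worth noting (an aside, not from the question): MySQL's RAND() accepts an integer seed, and the same seed always produces the same ordering, so persisting a single integer in the session is enough to make the shuffle repeatable across pages. A sketch in Rails-2-era syntax to match the question (Item is a stand-in model name):

        # store one small integer per session instead of thousands of IDs
        seed = session[:shuffle_seed] ||= rand(2**31)

        @items = Item.find(:all,
          :order  => "RAND(#{seed.to_i})",   # same seed => same order on every request
          :limit  => per_page,
          :offset => page * per_page)

    The trade-off is that MySQL re-sorts the whole table on every page load, so this exchanges the intersection table's storage and invalidation problems for CPU per request.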


  • Generating all possible subsets of a given QuerySet in Django

    - by Glen
    This is just an example, but given the following model:

        class Foo(models.Model):
            bar = models.IntegerField()

            def __str__(self):
                return str(self.bar)

            def __unicode__(self):
                return str(self.bar)

    And the following QuerySet object:

        foobar = Foo.objects.filter(bar__lt=20).distinct()

    (meaning, a set of unique Foo models with bar < 20), how can I generate all possible subsets of foobar? Ideally, I'd like to further limit the subsets so that, for each subset x of foobar, the sum of all f.bar in x (where f is a model of type Foo) is between some maximum and minimum value. So, for example, given the following instance of foobar:

        >>> print foobar
        [<Foo: 5>, <Foo: 10>, <Foo: 15>]

    And min=5, max=25, I'd like to build an object (preferably a QuerySet, but possibly a list) that looks like this:

        [[<Foo: 5>], [<Foo: 10>], [<Foo: 15>],
         [<Foo: 5>, <Foo: 10>], [<Foo: 5>, <Foo: 15>],
         [<Foo: 10>, <Foo: 15>]]

    I've experimented with itertools but it doesn't seem particularly well-suited to my needs. I think this could be accomplished with a complex QuerySet, but I'm not sure how to start.
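    itertools does cover this directly: combinations() over every subset size is the standard powerset recipe, and the sum bound is just a filter on top. The result has to be a plain list, since a QuerySet cannot represent arbitrary groupings of rows. A sketch using the min/max semantics described above:

        from itertools import chain, combinations

        def bounded_subsets(queryset, lo, hi):
            items = list(queryset)  # evaluate the QuerySet once
            # powerset, excluding the empty set
            subsets = chain.from_iterable(
                combinations(items, r) for r in range(1, len(items) + 1))
            return [list(s) for s in subsets
                    if lo <= sum(f.bar for f in s) <= hi]

        # bounded_subsets(Foo.objects.filter(bar__lt=20).distinct(), 5, 25)

    Be aware this is O(2^n) in the number of rows, so it is only practical for small querysets.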


  • Which C++ Standard Library wrapper functions do you use?

    - by Neil Butterworth
    This question, asked this morning, made me wonder which features you think are missing from the C++ Standard Library, and how you have gone about filling the gaps with wrapper functions. For example, my own utility library has this function for vector append:

        template <class T>
        std::vector<T> & operator += ( std::vector<T> & v1, const std::vector<T> & v2 ) {
            v1.insert( v1.end(), v2.begin(), v2.end() );
            return v1;
        }

    and this one for clearing (more or less) any type - particularly useful for things like std::stack:

        template <class C>
        void Clear( C & c ) {
            c = C();
        }

    I have a few more, but I'm interested in which ones you use. Please limit answers to wrapper functions - i.e. no more than a couple of lines of code.
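    For readers unfamiliar with operator overloads like the one above, usage reads like ordinary arithmetic append (a trivial illustration, not part of the question):

        std::vector<int> a, b;
        a.push_back(1); a.push_back(2);
        b.push_back(3); b.push_back(4);
        a += b;   // a now holds 1, 2, 3, 4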


  • Sending large data: getJSON or proxy?

    - by numerical25
    Hey guys, I was told that the only trick to sending data to an external server (i.e. cross-domain) is to use getJSON. Well, my problem is that the data I am sending exceeds the getJSON data limit. I am tracking mouse movements on a screen for analytics. Another option is that I could send a little data at a time, probably every time the mouse moves, but that seems as if it would slow things down. I could also set up a proxy server. My question is which would be better: setting up a proxy server, or just sending bits of information via JavaScript or jQuery? What do the professionals use (Google and other companies that build mash-ups sending a lot of data to cross-domain sites)? I need to know the best practices. Thanks! Also, the data is put into JSON.
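    The limit exists because JSONP/getJSON requests are GET-only, so the payload rides the query string and inherits browser and server URL length caps; a same-origin POST relayed by your own server avoids that entirely. A minimal sketch of the proxy route (the endpoint names are made up for illustration):

        // client side: POST the accumulated points same-origin; no URL size limit
        $.post('/track-proxy.php', { events: JSON.stringify(points) });

        <?php
        // track-proxy.php: relay the payload to the external analytics host
        $ch = curl_init('http://analytics.example.com/collect'); // hypothetical endpoint
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, array('events' => $_POST['events']));
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_exec($ch);
        curl_close($ch);
        ?>

    Batching (sending every few seconds rather than on every mousemove) combines well with this and keeps request counts down.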


  • For each result in MySQL query, push to array (complicated)

    - by Dylan Taylor
    Okay, here's what I'm trying to do. I am running a MySQL query for the most recent posts. For each of the returned rows, I need to push the ID of the row to an array, then within that ID in the array, I need to add more data from the row. A multi-dimensional array. Here's my code thus far:

        $query = "SELECT * FROM posts ORDER BY id DESC LIMIT 10";
        $result = mysql_query($query);
        while ($row = mysql_fetch_array($result)) {
            $id = $row["id"];
            $post_title = $row["title"];
            $post_text = $row["text"];
            $post_tags = $row["tags"];
            $post_category = $row["category"];
            $post_date = $row["date"];
        }

    As you can see, I haven't done anything with arrays yet. Here's the ideal structure I'm looking for, just in case you're confused. The master array, I guess you could call it - we'll just call this array $posts. Within this array, I have one array for each row returned in my MySQL query. Within those arrays there is the $post_title, $post_text, etc. How do I do this? I'm so confused... an example would be really appreciated. -Dylan
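    The structure described is just an assignment with the row ID as the array key; a sketch built directly on the loop above:

        $posts = array();
        while ($row = mysql_fetch_assoc($result)) {
            $posts[$row["id"]] = array(
                "title"    => $row["title"],
                "text"     => $row["text"],
                "tags"     => $row["tags"],
                "category" => $row["category"],
                "date"     => $row["date"],
            );
        }

        // e.g. $posts[42]["title"] is the title of post 42:
        // foreach ($posts as $id => $post) { echo $post["title"]; }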


  • AJAX/JSONP question: "Access is denied" in IE when making a cross-domain request

    - by Sisir
    OK, here we go. I have already searched Stack Overflow for the answer and found some useful info, but I want to clear up some more things. I also searched the net, but with no real help. I have worked with some APIs (Yelp, outside.in). With Yelp I used to inject a script into the head with the URL request to the API and a callback function. That worked fine in all browsers. But when using the outside.in API, when I call the URL the callback is not invoked. Yelp has a URL field that can be used like callback=callbackfunction, so the callback is called automatically, but outside.in has no such field. Is there a standard callback parameter which will work regardless of server/API? I also tried a standard AJAX request using jQuery's $.ajax() function. It worked on my local PC in both IE and other browsers, but on the live site it is not working in IE, showing the error "access denied"; other browsers seem OK. Firebug in my FF also doesn't report any errors. outside.in has a JavaScript example but it is too hard for me to understand: github.com/outsidein/api-examples/tree/master/javascript/browser/ Site I am working on: http://citystir.com Yelp: yelp.com outside.in: outside.in Technical info: I am using wampserver locally, WordPress for hosting, GoDaddy, and Apache on Linux for the remote server. Code: using jQuery $.ajax, the URL is like:

        "http://hyperlocal-api.outside.in/v1.1/states/Illinois/cities/chicago/stories?dev_key=" + key + "&sig=" + signeture + "&limit=3"

        function makeOutsideRequest(url){
            $.ajax({
                url: url,
                dataType: 'json',
                type: 'GET',
                success: function (data, status, xhr) {
                    if (data == null) {
                        alert("An error occurred connecting to " + url +
                              ". Please ensure that the server is running and configured to allow cross-origin requests.");
                    } else {
                        printHomeNews(data);
                    }
                },
                error: function (xhr, status, error) {
                    alert("An error occurred - check the server log for a stack trace.");
                }
            });
        }

    Thanks!
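    There is no universal callback parameter; JSONP only works when the API explicitly supports one (jQuery's convention is callback=?, which it replaces with a generated function name). If the provider supports it, the cross-browser request is a small change to the code above (a sketch; whether outside.in honors a callback parameter has to be checked in their docs):

        $.ajax({
            url: url + "&callback=?",  // jQuery fills in the JSONP function name
            dataType: 'jsonp',         // script injection instead of XHR, so IE raises no cross-domain block
            success: printHomeNews
        });

    If the API has no JSONP support at all, the remaining option is a same-origin server-side proxy that fetches the feed and hands it to the page, since plain cross-domain XHR is exactly what triggers IE's "access denied".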


  • Number of characters recommended for a statement

    - by liaK
    Hi, I have been using Qt 4.5, and with it C++. I have been told that it's standard practice to keep the length of each statement in an application to 80 characters. Even in Qt Creator we can make a right border visible so that we know when we are crossing the 80-character limit. But my question is: is this really a standard that is followed? Because in my application I use indenting and all, so it's quite common that I cross the boundary. Other cases include an error message which is a bit explanatory and which sits in an inner block of code, so it too crosses the boundary. Usually my variable names are a bit lengthy, so as to make the names meaningful, and when I call functions on those variables I cross the limit again. Function names are not few characters either. I agree that a horizontal scroll bar shows up and it's quite annoying to move back and forth. So, for function calls with multiple arguments, when the boundary is reached I put the remaining arguments on a new line. But besides that, for a single statement (e.g. a very long error message in double quotes " ", or a chain like longfun1()->longfun2()->...), if I use a \ and split it into multiple lines, the readability becomes very poor. So is it good practice to have those statement length restrictions? Does this restriction have to be followed? I don't think it depends on a specific language anyway; I added the C++ and Qt tags in case it might. Any pointers regarding this are welcome.


  • Why are only 2 of my queries working? (PHP/MySQL)

    - by ggfan
    When I remove a posting, I want it to remove the posting itself, any comments related to it, the likes/dislikes of the comments, and the favorites that people added. So there are 4 queries that do this, but somehow only 2 of the 4 are working. When I swap, say, query 2 and query 3, then query 2 works and query 3 doesn't... The table names are correct, and the data arrives via $_GET.

        // Grab the data from the GET
        $posting_id = $_GET['posting_id'];

        // Delete the posting from the database
        $query = "DELETE FROM posting WHERE posting_id=$posting_id LIMIT 1";
        mysqli_query($dbc, $query) or die(mysql_error());

        // Delete the comments
        $query2 = "DELETE FROM comments WHERE posting_id=$posting_id ";
        mysqli_query($dbc, $query2) or die(mysql_error());

        // Delete the like/dislike
        $query3 = "DELETE FROM comment_likedislike WHERE posting_id=$posting_id ";
        mysqli_query($dbc, $query3) or die(mysql_error());

        // Delete the favorites
        $query4 = "DELETE FROM favorite WHERE posting_id=$posting_id ";
        mysqli_query($dbc, $query4) or die(mysql_error());
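    Two things stand out in this code (observations, not from the thread): the error handler mixes extensions - mysqli_query() runs the query but mysql_error() reports on the old mysql extension, so any real failure message is swallowed - and $_GET['posting_id'] goes into the SQL unescaped. A sketch of the corrected pattern for one of the queries:

        $posting_id = (int) $_GET['posting_id'];   // force an integer; blocks SQL injection

        $query2 = "DELETE FROM comments WHERE posting_id=$posting_id";
        mysqli_query($dbc, $query2) or die(mysqli_error($dbc));  // matching error function

    With mysqli_error($dbc) in place, the two "silent" queries should finally report why they fail (a mistyped table name, a missing column, etc.).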


  • Rails 3 fields_for aggressive loading?

    - by Seth
    Hi all, I'm trying to optimize (limit) queries in a view. I am using the fields_for function. I need to reference various properties of the object, such as username, for display purposes. However, this is a rel table, so I need to join with my users table. The result is N sub-queries, 1 for each field in fields_for. It's difficult to explain, but I think you'll understand what I'm asking if I paste my code:

        <%= form_for @election do |f| %>
          <%= f.fields_for :voters do |voter| %>
            <%= voter.hidden_field :id %>
            <%= voter.object.user.preferred_name %>
          <% end %>
        <% end %>

    I have like 10,000 users, and many times an election will include all 10,000 users. That's 10,000 subqueries every time this view is loaded. I want fields_for to JOIN on users. Is this possible? I'd like to do something like:

        ...
        <%= f.fields_for :voters, :joins => :users do |voter| %>
        ...
        <% end %>
        ...

    But that, of course, doesn't work :(
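    fields_for itself takes no query options - it just iterates whatever collection the association returns - so the eager loading belongs where the record is fetched, typically in the controller. A sketch in Rails 3 syntax (model names guessed from the view code):

        # controller: one query for the election/voters plus one batched query
        # for users, instead of a users lookup per voter
        @election = Election.includes(:voters => :user).find(params[:id])

    With the association preloaded, voter.object.user in the view reads from memory and the N per-row queries disappear.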


  • What's the "correct way" to organize this project?

    - by user571747
    I'm working on a project that allows multiple users to submit large data files and perform operations on them. The "backend", which performs these operations, is written in Perl, while the "frontend" uses PHP to load HTML template files and determine which content to deliver. Data is stored in a database (MySQL, SQLite, Oracle), and while there is data which has not yet been acted upon, Perl adds it to a running queue which delivers data to other threads based on system load. In addition, there may be pre- and post-processing of the data before and after the main Perl script operates (the specifications are unclear), so I may want to allow these processors to be user-selectable plugins. I had been writing this project in a more procedural fashion, but I am quickly realizing the benefit of separating concerns so as to limit the scope one change has on the rest of the project. I'm quite inexperienced with design patterns and am curious what the best way to proceed is. I've heard MVC thrown around quite a bit, but I am unsure of how to apply it. Specifically, what are some good options to structure this code (in terms of design patterns and folder hierarchy)? How can I achieve this with both PHP and Perl while minimizing duplicated code between languages? Should I keep my PHP files in the top level so I don't have ugly paths in the URL? Also, if I want to provide interchangeable databases, does each table need its own DAO implementation?


  • Caching queries in Django

    - by dolma33
    In a Django project I only need to cache a few queries, using a cache table instead of memcached because of server limitations. One of those queries looks like this: let's say I have a Parent object, which has a lot of Child objects. I need to store the result of the simple query parent.children.all(). I have no problem with that, and everything works as expected with code like:

        key = "%s_children" % (parent.name)
        value = cache.get(key)
        if value is None:
            cache.set(key, parent.children.all(), CACHE_TIMEOUT)
            value = cache.get(key)

    But sometimes, just sometimes, cache.set does nothing, and after executing cache.set, cache.get(key) keeps returning None. After some tests, I've noticed that cache.set is not working when parent.children.all().count() has higher values. That means that if I'm storing (for example) 600 children objects inside the key, it works fine, but it won't work with 1200 children. So my question is: is there a limit to the data that a key can store? How can I override it? Second question: which way is "better", the above code, or the following one?

        key = "%s_children" % (parent.name)
        value = cache.get(key)
        if value is None:
            value = parent.children.all()
            cache.set(key, value, CACHE_TIMEOUT)

    The second version won't cause errors if cache.set doesn't work, so it could be a workaround for my issue, but obviously not a solution. In general, let's forget about my issue: which version would you consider "better"?
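    Two details worth knowing here (general Django/cache behavior, not from the thread): Django's cache backends silently drop values that exceed the backend's size limit - with the database backend, whatever fits in the cache table's value column - and a lazy QuerySet pickles its whole result set when cached. Caching the evaluated list makes the stored size explicit and saves the second get:

        key = "%s_children" % parent.name
        value = cache.get(key)
        if value is None:
            value = list(parent.children.all())   # evaluate once, cache plain objects
            cache.set(key, value, CACHE_TIMEOUT)

    If the lists stay too large, caching only the IDs (parent.children.values_list('id', flat=True)) keeps entries small at the cost of re-fetching rows on a hit.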


  • Get unique data in a SQL query

    - by Jensen
    Hi, I have a database that contains some data in this form:

        icon(name, size, tag)
        (myicon.png,      16,  'twitter')
        (myicon.png,      32,  'twitter')
        (myicon.png,      128, 'twitter')
        (myicon.png,      256, 'twitter')
        (anothericon.png, 32,  'facebook')
        (anothericon.png, 128, 'facebook')
        (anothericon.png, 256, 'facebook')

    So as you can see, the name field is not unique: I can have multiple icons with the same name, distinguished by the size field. Now in PHP I have a query that gets ONE icon per set, for example:

        $dbQueryIcons = mysql_query("SELECT * FROM pl_icon WHERE tag LIKE '%".$SEARCH_QUERY."%' GROUP BY name ORDER BY id DESC LIMIT ".$firstEntry.", ".$CONFIG['icon_per_page']."") or die(mysql_error());

    With this example, if $tag contains 'twitter' it will show ONLY the first SQL data entry with the tag 'twitter', so it will be:

        (myicon.png, 16, 'twitter')

    This is what I want, but I would prefer the 128 size by default. Is it possible to tell SQL to send me only the 128 size when it exists, and another size otherwise? In another question someone gave me a solution with GROUP BY, but in this case that doesn't work because we already have a GROUP BY name. And if I delete the GROUP BY, it shows me all sizes of the same icons. Thanks!
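    One era-appropriate MySQL trick (a sketch that relies on MySQL's permissive GROUP BY picking rows from an ordered subquery - behavior that is not guaranteed by the SQL standard, so it should be tested on the target version): rank the preferred size first, then group.

        SELECT *
        FROM (
            SELECT * FROM pl_icon
            WHERE tag LIKE '%twitter%'
            ORDER BY (size = 128) DESC, size DESC   -- 128 first, then biggest remaining
        ) AS ranked
        GROUP BY name;

    (size = 128) evaluates to 1 for the preferred row and 0 otherwise, so each name keeps its 128-pixel icon when one exists and falls back to the largest other size.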


  • Why does PDO print my password when the connection fails?

    - by Joe Hopfgartner
    I have a simple website where I establish a connection to a MySQL server using PDO:

        $dbh = new PDO('mysql:host=localhost;dbname=DB;port=3306', 'USER', 'SECRET',
                       array(PDO::MYSQL_ATTR_INIT_COMMAND => "SET NAMES utf8"));

    I had some traffic on my site, the server's connection limit was reached, and the website threw this error, with my PLAIN password in it!

        Fatal error: Uncaught exception 'PDOException' with message 'SQLSTATE[08004] [1040] Too many connections' in /home/premiumize-me/html/index.php:64
        Stack trace: #0 /home/premiumize-me/html/index.php(64): PDO->__construct('mysql:host=loca...', 'USER', 'SECRET', Array) #1 {main}
          thrown in /home/premiumize-me/html/index.php on line 64

    Ironically, I switched to PDO for security reasons, so this really shocked me, because this exact error is something you can provoke very easily on most sites using simple HTTP flooding. I have now wrapped my connection in a try/catch clause, but still, I think this is catastrophic! I am new to PDO and my question is: what do I have to consider to be safe? How do I establish a connection in a secure way? Are there other known security holes like this one that I have to be aware of?
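    The password appears because an uncaught PDOException's stack trace includes the constructor arguments, and with display_errors enabled PHP prints the trace to the page. Catching the exception and disabling display_errors in production are the standard pair of fixes (a sketch of the usual pattern):

        <?php
        // php.ini (production): display_errors = Off, log_errors = On
        try {
            $dbh = new PDO('mysql:host=localhost;dbname=DB;port=3306', 'USER', 'SECRET',
                           array(PDO::MYSQL_ATTR_INIT_COMMAND => "SET NAMES utf8"));
        } catch (PDOException $e) {
            error_log($e->getMessage());               // full detail goes to the private log
            die('Database temporarily unavailable.');  // generic message for visitors
        }
        ?>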


  • SQLAlchemy duplicated commit

    - by PythonWolf
    Good morning, I'm currently facing a problem in my CherryPy application, in my own custom session module: when performing session.add(), the exact same object gets updated twice.

        cherrypy.request.SessionManager.user_data = user
        try:
            db_session.add(cherrypy.request.SessionManager)
            db_session.commit()

    will produce:

        2011-06-21 09:16:48,991 INFO sqlalchemy.engine.base.Engine.0x...04cL BEGIN (implicit)
        2011-06-21 09:16:49,015 INFO sqlalchemy.engine.base.Engine.0x...04cL SELECT ..... FROM "Clients_Users" WHERE "Clients_Users".username = %(username_1)s AND "Clients_Users".password = %(password_1)s LIMIT 1 OFFSET 0
        2011-06-21 09:16:49,015 INFO sqlalchemy.engine.base.Engine.0x...04cL {'password_1': '123', 'username_1': u'1'}
        2011-06-21 09:16:49,047 INFO sqlalchemy.engine.base.Engine.0x...04cL UPDATE "SYS_Sessions" SET user_data=%(user_data)s WHERE "SYS_Sessions".id = %(SYS_Sessions_id)s
        2011-06-21 09:16:49,067 INFO sqlalchemy.engine.base.Engine.0x...04cL {'SYS_Sessions_id': 92L, 'user_data': }
        2011-06-21 09:16:49,071 INFO sqlalchemy.engine.base.Engine.0x...04cL COMMIT
        2011-06-21 09:16:49,093 INFO sqlalchemy.engine.base.Engine.0x...04cL BEGIN (implicit)
        2011-06-21 09:16:49,095 INFO sqlalchemy.engine.base.Engine.0x...04cL UPDATE "SYS_Sessions" SET user_data=%(user_data)s WHERE "SYS_Sessions".id = %(SYS_Sessions_id)s
        2011-06-21 09:16:49,095 INFO sqlalchemy.engine.base.Engine.0x...04cL {'SYS_Sessions_id': 92L, 'user_data': }
        2011-06-21 09:16:49,108 INFO sqlalchemy.engine.base.Engine.0x...04cL COMMIT

    Has anyone seen this before? P.S. This doesn't happen in the rest of the modules I have made.


  • Why does this SELECT ... JOIN statement return no results?

    - by Stephen
    I have two tables:

    1. tableA is a list of records with many columns. There is a timestamp column called "created".
    2. tableB is used to track users in my application that have locked a record in tableA for review. It consists of four columns: id, user_id, record_id, and another timestamp column.

    I'm trying to select up to 10 records from tableA that have not been locked for review by anyone in tableB (I'm also filtering in the WHERE clause on a few other columns from tableA, like record status). Here's what I've come up with so far:

        SELECT tableA.*
        FROM tableA
        LEFT OUTER JOIN tableB ON tableA.id = tableB.record_id
        WHERE tableB.id = NULL
          AND tableA.status = 'new'
          AND tableA.project != 'someproject'
          AND tableA.created BETWEEN '1999-01-01 00:00:00' AND '2010-05-06 23:59:59'
        ORDER BY tableA.created ASC
        LIMIT 0, 10;

    There are currently a few thousand records in tableA and zero records in tableB. There are definitely records that fall between those timestamps, and I've verified this with a simple:

        SELECT * FROM tableA WHERE created BETWEEN '1999-01-01 00:00:00' AND '2010-05-06 23:59:59'

    The first statement above returns zero rows, and the second one returns over 2,000 rows.
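    The likely culprit is the NULL comparison: in SQL, tableB.id = NULL evaluates to NULL (never true) for every row, so the WHERE clause filters everything out. The anti-join test has to use IS NULL:

        -- only this predicate changes; the rest of the query stays as written
        WHERE tableB.id IS NULL
          AND tableA.status = 'new'
          ...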


  • Entity Framework: Detect DBSchema for licensing

    - by Program.X
    We're working on a product that may or may not have different licensing schemes for different databases. In particular, a lower-tier product would run on SQL Express, but we don't want the end user to be able to use a "full-fat" SQL install while benefiting from the price cut. Clearly this must also be the case for other DBs, so Oracle may command a higher price than SQL Server, for instance (hypothetically). We're using Entity Framework. Obviously this hides all the neatness of accessing the core schema and using sp_version or whatever it is. We'd rather not pre-load the condition by running a series of SQL commands (one for each platform) and seeing what comes back, as this would limit our DB options, but if necessary we're prepared to do it. So, is it possible to get this using EF itself? DbContext.Connection.ServerVersion only returns something like "9.00.1234" (for SQL Server 2005). I would assume (though haven't yet checked - I need to install an instance) SQL Express would return something similar, "pretending" it is full-fat. Obviously, we have no Oracle/MySQL/etc. instance, so we can't establish whether that returns the text "Oracle" or whatever.
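    For the SQL Server case specifically, the edition is queryable regardless of what ServerVersion reports: SERVERPROPERTY('EngineEdition') returns 4 for Express and other values for the paid editions. A sketch of issuing it through the EF connection (C#; the edition codes are per SQL Server's documentation, and the cast assumes an ObjectContext-style Connection property):

        // grab the underlying store connection from the EntityConnection
        var storeConn = ((System.Data.EntityClient.EntityConnection)context.Connection)
                            .StoreConnection;
        storeConn.Open();
        using (var cmd = storeConn.CreateCommand())
        {
            cmd.CommandText = "SELECT SERVERPROPERTY('EngineEdition')";
            int edition = Convert.ToInt32(cmd.ExecuteScalar());
            bool isExpress = (edition == 4);   // 2 = Standard, 3 = Enterprise, 4 = Express
        }

    The engine itself is identifiable cross-database from storeConn.GetType().Name ("SqlConnection", "OracleConnection", ...), which covers the "which platform is this" half without engine-specific SQL.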


  • "text-overflow: ellipsis" not working well in firefox with floating element around

    - by Freedom
    See jsfiddle: http://jsfiddle.net/9v8faLeh/1/ I have two elements, .text and .badge, in a .container with a limited width:

        <div class="container">
          <span class="badge">(*)</span>
          <span class="text">this is a long long long long text.</span>
        </div>

    The .badge element may or may not exist in a .container, depending on the data. If a .badge exists, I want it to float to the right, and if the .text is too long, the text should be cut off with an ellipsis.

        .container {
          width: 150px;
          border: 1px solid;
          padding: 5px;
          white-space: nowrap;
          text-overflow: ellipsis;
          overflow: hidden;
        }
        .badge {
          float: right;
          margin-left: 5px;
        }

    If you open the jsfiddle link in Chrome or IE, it displays correctly, as I expect. But in Firefox, the .text and .badge overlap if the text is long. I don't want to use any JavaScript. How can I achieve the same result in Firefox?
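    One CSS-only approach that usually resolves this (a sketch; behavior should be verified in the target Firefox version): move the truncation onto .text itself and give it overflow: hidden, which makes it a block formatting context that sits beside the float instead of flowing under it.

        .text {
          display: block;
          overflow: hidden;        /* BFC: the text box stops at the float's edge */
          white-space: nowrap;
          text-overflow: ellipsis;
        }

    The float must come before .text in the markup (as it already does here) for the badge to reserve its space on the line.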


  • Programming logic best practice - redundant checks

    - by eldblz
    I'm creating a large PHP project and I have a trivial doubt about how to proceed. Assume we've got a class Books; in this class there is the method ReturnInfo:

        function ReturnInfo($id) {
            if( is_numeric($id) ) {
                $query = "SELECT * FROM books WHERE id='" . $id . "' LIMIT 1;";
                if( $row = $this->DBDrive->ExecuteQuery($query, $FetchResults=TRUE) ) {
                    return $row;
                } else {
                    return FALSE;
                }
            } else {
                throw new Exception('Books - ReturnInfo - id not valid.');
            }
        }

    Then I have another method, PrintInfo:

        function PrintInfo($id) {
            print_r( $this->ReturnInfo($id) );
        }

    Obviously the code samples are just an example, not actual production code. In the second method, should I check (again) whether $id is numeric? Or can I skip it because it is already taken care of in the first method, and if it's not valid an exception will be thrown? Until now I have always written code with redundant checks (no matter if something is already checked elsewhere, I check it here too). Is there a best practice? Is it just common sense? Thank you in advance for your kind replies.


  • Extended Zend_Db_Table_Row_Abstract does not return values

    - by WesleyE
    Hi, I'm quite new to Zend and its database classes. I'm having problems mapping a Zend_Db_Table_Row_Abstract to my rows: whenever I map the rows to a class (Job) that extends Zend_Db_Table_Row_Abstract, the database data is no longer retrievable. I'm not getting any errors; trying to get data simply returns null. Here is my code so far:

    Jobs:

        class Jobs extends Zend_Db_Table_Abstract
        {
            protected $_name = 'jobs';
            protected $_rowsetClass = "Job";

            public function getActiveJobs()
            {
                $select = $this->select()->where('jobs.jobDateOpen < UNIX_TIMESTAMP()')->limit(15,0);
                $rows = $this->fetchAll($select);
                return $rows;
            }
        }

    Job:

        class Job extends Zend_Db_Table_Row_Abstract
        {
            public function getCompanyName()
            {
                // Gets the companyName for this row (it is in another table), just for example
            }
        }

    Controller:

        $oJobs = new Jobs();
        $aActiveJobs = $oJobs->getActiveJobs();
        foreach ($aActiveJobs as $value) {
            var_dump($value->jobTitle);
        }

    When I remove the protected $_rowsetClass = "Job"; line, so that the table row is not mapped to my own class, I get all the jobTitles perfectly. What am I doing wrong here? Thanks in advance, Wesley
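    One thing that jumps out (an observation, not from the thread): Job extends Zend_Db_Table_Row_Abstract, a row class, yet it is registered as the rowset class, so fetchAll() tries to use a single-row object as the whole collection. Zend_Db_Table keeps the two roles in separate properties; a sketch of the fix:

        class Jobs extends Zend_Db_Table_Abstract
        {
            protected $_name = 'jobs';
            protected $_rowClass = 'Job';   // per-row class: a Zend_Db_Table_Row_Abstract subclass
            // $_rowsetClass is only for Zend_Db_Table_Rowset_Abstract subclasses
        }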


  • Trying to build a dynamic PHP mysql_query string to update a row and get back the updated row

    - by adardesign
    I have a form whose onChange event jQuery tracks with .change(), so when something is changed it runs an AJAX request, passing the column, id, and value in the URL. Below is the PHP code that should update the data. My question is how to build the MySQL query string dynamically, and how to echo back the changes/updates that were just made in the db. Here is the PHP code I am trying to work with:

        <?php require_once('Connections/connect.php'); ?>
        <?php
        $id = $_GET['id'];
        $collumn = $_GET['collumn'];
        $val = $_GET['val'];
        ?>
        <?php
        mysql_select_db($myDB, $connection);

        // here i try to build the query string and pass in the passed in values
        $sqlUpdate = 'UPDATE `plProducts`.`allPens` SET `$collumn` = '$val' WHERE `allPens`.`prodId` = '$id' LIMIT 1;';

        // here i want to echo back the updated row (or the updated data)
        $seeResults = mysql_query($sqlUpdate, $connection);
        echo $seeResults
        ?>
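    As written, the single-quoted string never interpolates $collumn/$val, and the nested quotes are a parse error; building the string with escaping (plus a column whitelist, since column names cannot be escaped like values) is the usual fix, and a follow-up SELECT returns the saved row. A sketch in the same old mysql_* style (the whitelist contents are hypothetical):

        $allowed = array('prodName', 'prodPrice', 'prodColor');   // hypothetical column names
        if (!in_array($collumn, $allowed)) {
            die('invalid column');
        }

        $sqlUpdate = sprintf(
            "UPDATE `plProducts`.`allPens` SET `%s` = '%s' WHERE `prodId` = %d LIMIT 1",
            $collumn,
            mysql_real_escape_string($val, $connection),
            (int) $id
        );
        mysql_query($sqlUpdate, $connection) or die(mysql_error());

        // echo the updated row back to the ajax caller as JSON
        $result = mysql_query("SELECT * FROM `plProducts`.`allPens` WHERE `prodId` = " . (int) $id, $connection);
        echo json_encode(mysql_fetch_assoc($result));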


  • Format a text field while typing (iOS)

    - by BBios
    I've seen other examples and tried:

        - (BOOL)textField:(UITextField *)textField shouldChangeCharactersInRange:(NSRange)range replacementString:(NSString *)string

    I am not sure what I am doing wrong; I just started coding my first iPhone app. This is what I am trying to do: I have 4 text fields and each has a limit on the number of characters while typing. I have done this using the code below.

        - (BOOL)textField:(UITextField *)textField shouldChangeCharactersInRange:(NSRange)range replacementString:(NSString *)string
        {
            int valid;
            NSString *cs2 = [textField.text stringByReplacingCharactersInRange:range withString:string];
            // int charCount = [cs2 length];
            if(textField == cvv){
                valid = 4;
            }else if(textField == cardName) {
                valid=26;
            }else if(textField == expDate) {
                valid=5;
                // if (charCount == 2 ) {
                //     textField.text = [cs2 stringByAppendingString:@"/"];
                //     textField.text = cs2;
                //     return YES;
                // }
            }else if(textField == acNumber) {
                valid=19;
            }
            return !([cs2 length] > valid);
        }

    This works fine so far. I have a text field where the user enters the expiry date, and I would like to format it as they type: if I enter 112 it should display as 01/12, and if I enter 2 it should display as 1122. I tried checking if the length of the text field value is 2 and then appending a /, but then when I enter 12 it gives 11/22.
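    A common pattern for this kind of as-you-type formatting (a sketch, assuming the goal is an automatic MM/YY slash; variable names follow the question's code): rebuild the field from its digits on every change and insert the separator yourself, returning NO because the method sets the text directly.

        } else if (textField == expDate) {
            // keep only the digits from the proposed new text
            NSString *digits = [[cs2 componentsSeparatedByCharactersInSet:
                [[NSCharacterSet decimalDigitCharacterSet] invertedSet]]
                    componentsJoinedByString:@""];
            if ([digits length] > 4) return NO;          // MMYY is at most 4 digits
            if ([digits length] > 2) {
                textField.text = [NSString stringWithFormat:@"%@/%@",
                    [digits substringToIndex:2], [digits substringFromIndex:2]];
            } else {
                textField.text = digits;
            }
            return NO;   // the field was already updated by hand
        }

    One caveat with this approach: returning NO after assigning textField.text moves the caret to the end, which is usually acceptable for a short expiry field.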

