Search Results

Search found 1999 results on 80 pages for 'temporary'.

Page 23/80

  • Fastest Java way to remove the first/top line of a file (like a stack)

    - by christangrant
    I am trying to improve an external sort implementation in Java. I have a bunch of BufferedReader objects open for temporary files, and I repeatedly remove the top line from each of these files. This pushes the limits of the Java heap. I would like a more scalable method of doing this without losing speed to a pile of constructor calls. One solution is to open files only when they are needed, then read the first line and delete it, but I am afraid that this would be significantly slower. So, using the Java libraries, what is the most efficient method of doing this?

    --Edit-- For an external sort, the usual method is to break a large file up into several chunk files, sort each of the chunks, and then treat the sorted files like buffers: pop the top item from each file, and the smallest of all those is the global minimum. Then continue until all items are consumed. http://en.wikipedia.org/wiki/External_sorting

    My temporary files (buffers) are basically BufferedReader objects. The operations performed on these files are the same as stack/queue operations (peek and pop, no push needed). I am trying to make these peek and pop operations more efficient, because keeping many BufferedReader objects open takes up too much space.
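
    One way to bound the memory cost is to give each chunk file a small, explicit buffer and wrap it in a peek/pop interface. A minimal sketch, assuming line-per-record chunk files (names and buffer sizes here are illustrative):

        import java.io.*;

        // Peekable wrapper around one sorted chunk file.
        class ChunkReader implements Closeable {
            private final BufferedReader reader;
            private String top; // current head line, or null at end of file

            ChunkReader(File file, int bufferChars) throws IOException {
                // A small explicit buffer (e.g. 4096 chars) bounds per-file heap use.
                reader = new BufferedReader(new FileReader(file), bufferChars);
                top = reader.readLine();
            }

            String peek() { return top; }

            String pop() throws IOException {
                String line = top;
                top = reader.readLine();
                return line;
            }

            public void close() throws IOException { reader.close(); }
        }

    The k-way merge then becomes a PriorityQueue of ChunkReader objects ordered by peek(): pop the head, re-insert the reader if it still has lines, and repeat.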


  • PHP + CURL How to get file name

    - by Gunjan
    I'm trying to download users' profile pictures from Facebook in PHP using this function:

        public static function downloadFile($url, $options = array()) {
            if (!is_array($options)) $options = array();
            $options = array_merge(array(
                'connectionTimeout' => 5,    // seconds
                'timeout'           => 10,   // seconds
                'sslVerifyPeer'     => false,
                'followLocation'    => true, // if true, limit recursive redirection by
                'maxRedirs'         => 2,    // setting value for "maxRedirs"
            ), $options);
            // create a temporary file (we are assuming that we can write to the system's temporary directory)
            $tempFileName = tempnam(sys_get_temp_dir(), '');
            $fh = fopen($tempFileName, 'w');
            $curl = curl_init($url);
            curl_setopt($curl, CURLOPT_FILE, $fh);
            curl_setopt($curl, CURLOPT_CONNECTTIMEOUT, $options['connectionTimeout']);
            curl_setopt($curl, CURLOPT_TIMEOUT, $options['timeout']);
            curl_setopt($curl, CURLOPT_HEADER, false);
            curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, $options['sslVerifyPeer']);
            curl_setopt($curl, CURLOPT_FOLLOWLOCATION, $options['followLocation']);
            curl_setopt($curl, CURLOPT_MAXREDIRS, $options['maxRedirs']);
            curl_exec($curl);
            curl_close($curl);
            fclose($fh);
            return $tempFileName;
        }

    The problem is that it saves the file in the /tmp directory with a random name and without the extension. How can I get the original name of the file? (I'm more interested in the original extension.) The important things here are: the URL actually redirects to the image, so I can't get it from the original URL, and the final URL does not have the file name in its headers.
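
    Since the final URL and the Content-Type header both survive the redirect, one option is to ask cURL for them after the transfer and derive the extension from the MIME type. A sketch, assuming the server reports a usable Content-Type:

        $curl = curl_init($url);
        curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($curl, CURLOPT_FOLLOWLOCATION, true);
        $data     = curl_exec($curl);
        $finalUrl = curl_getinfo($curl, CURLINFO_EFFECTIVE_URL); // URL after redirects
        $mime     = curl_getinfo($curl, CURLINFO_CONTENT_TYPE);  // e.g. "image/jpeg"
        curl_close($curl);

        $map = array('image/jpeg' => 'jpg', 'image/png' => 'png', 'image/gif' => 'gif');
        $ext = isset($map[$mime])
            ? $map[$mime]
            : pathinfo(parse_url($finalUrl, PHP_URL_PATH), PATHINFO_EXTENSION);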


  • Parse a CSV file using python (to make a decision tree later)

    - by Margaret
    First off, full disclosure: this is going towards a uni assignment, so I don't want to receive code. :) I'm more looking for approaches; I'm very new to Python, having read a book but not yet written any code.

    The entire task is to import the contents of a CSV file, create a decision tree from the contents of the CSV file (using the ID3 algorithm), and then parse a second CSV file to run against the tree. There's a big (understandable) preference to have it capable of dealing with different CSV files (I asked if we were allowed to hard-code the column names, mostly to eliminate it as a possibility, and the answer was no). The CSV files are in a fairly standard format; the header row is marked with a #, then the column names are displayed, and every row after that is a simple series of values. Example:

        # Column1, Column2, Column3, Column4
        Value01, Value02, Value03, Value04
        Value11, Value12, Value13, Value14

    At the moment, I'm trying to work out the first part: parsing the CSV. To make the decisions for the decision tree, a dictionary structure seems like it's going to be the most logical, so I was thinking of doing something along these lines:

        Read in each line, character by character
            If the character is not a comma or a space
                Append character to temporary string
            If the character is a comma
                Append the temporary string to a list
                Empty string
        Once a line has been read
            Create a dictionary using the header row as the key (somehow!)
            Append that dictionary to a list

    However, if I do things that way, I'm not sure how to make a mapping between the keys and the values. I'm also wondering whether there is some way to perform an action on every dictionary in a list, since I'll need to be doing things to the effect of "Everyone return their values for columns Column1 and Column4, so I can count up who has what!" - I assume that there is some mechanism, but I don't think I know how to do it. Is a dictionary the best way to do it? Would I be better off doing things using some other data structure? If so, what?
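
    For reference, the character-by-character loop can be replaced by the standard csv module, which already handles the splitting. A minimal sketch, assuming the '#'-prefixed header format shown above (the file name is illustrative):

        import csv

        def load_rows(path):
            with open(path, newline='') as f:
                reader = csv.reader(f, skipinitialspace=True)
                header = next(reader)
                header[0] = header[0].lstrip('# ')  # drop the leading '#' marker
                # zip each data row with the header to get the key -> value mapping
                return [dict(zip(header, row)) for row in reader]

        rows = load_rows('train.csv')
        # "Everyone return their values for Column1" is then a list comprehension:
        column1_values = [row['Column1'] for row in rows]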


  • 503 server response for Googlebot

    - by Hallik
    I put an .htaccess file in my webroot with the following contents:

        RewriteBase /
        RewriteCond %{HTTP_USER_AGENT} ^.*(Googlebot|Googlebot|Mediapartners|Adsbot|Feedfetcher)-?(Google|Image)? [NC]
        RewriteRule .* /var/www/503.html

    This website is in maintenance mode, and I don't want anything indexed yet. I tested the code with a Firefox user-agent switcher plugin, and the access log shows this at the end of each log entry, but watching in TamperData or Firebug, it still returns a 200 server response instead of a 503. What am I doing wrong?

        "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

    Contents of /var/www/503.html:

        <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2//EN">
        <html>
        <head>
          <title>503 - Service temporary unavailable</title>
        </head>
        <body>
          <h1>503 - Service temporary unavailable</h1>
          <p>Sorry, this website is currently down for maintainance please retry later</p>
        </body>
        </html>

    I get this in my error log (with LogLevel debug; would that go into the vhost in a specific place? Every answer I see on Google is something different):

        Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace.
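
    The error log points at a rewrite loop: the rule also matches /503.html itself, and rewriting to a file never changes the status code anyway. A sketch of one way to send a real 503, assuming Apache 2.2+ (where non-redirect codes can be returned with R= and a "-" substitution) and that 503.html lives under the webroot:

        RewriteEngine On
        ErrorDocument 503 /503.html
        RewriteCond %{HTTP_USER_AGENT} (Googlebot|Mediapartners|Adsbot|Feedfetcher) [NC]
        RewriteCond %{REQUEST_URI} !^/503\.html$
        RewriteRule .* - [R=503,L]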


  • I want just the insert query for a temp table.

    - by John Stephen
    Hi, I am using C#.Net and SQL Server (Windows application). I had created a temporary table: when a button is clicked, the temporary table (#tmp_emp_details) is created. I have another button called "Insert Values" and also 5 textboxes. The values entered in the textboxes are used, and whenever the com.ExecuteNonQuery(); line runs, it throws the error message "Invalid object name '#tbl_emp_answer'." Below is the code; please give me a solution.

    Code for insert (in the Insert Values button):

        private void btninsertvalues_Click(object sender, EventArgs e)
        {
            username = txtusername.Text;
            examloginid = txtexamloginid.Text;
            question = txtquestion.Text;
            answer = txtanswer.Text;
            useranswer = txtanswer.Text;
            SqlConnection con = new SqlConnection("Data Source=.;Initial Catalog=tempdb;Integrated Security=True;");
            SqlCommand com = new SqlCommand("Insert into #tbl_emp_answer values('"+username+"','"+examloginid+"','"+question+"','"+answer+"','"+useranswer+"')", con);
            con.Open();
            com.ExecuteNonQuery();
            con.Close();
        }
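
    A local #temp table only exists for the session that created it, so creating it behind one button (on one SqlConnection) and inserting from another button (on a brand-new connection) produces exactly this "Invalid object name" error. A sketch of the usual shape, assuming the column types (which are not shown in the question) and using parameters instead of string concatenation:

        using (SqlConnection con = new SqlConnection(connectionString))
        {
            con.Open();
            // create and insert over the SAME open connection
            using (SqlCommand create = new SqlCommand(
                "CREATE TABLE #tbl_emp_answer (username varchar(50), examloginid varchar(50), " +
                "question varchar(500), answer varchar(500), useranswer varchar(500))", con))
            {
                create.ExecuteNonQuery();
            }
            using (SqlCommand insert = new SqlCommand(
                "INSERT INTO #tbl_emp_answer VALUES (@u, @e, @q, @a, @ua)", con))
            {
                insert.Parameters.AddWithValue("@u", txtusername.Text);
                insert.Parameters.AddWithValue("@e", txtexamloginid.Text);
                insert.Parameters.AddWithValue("@q", txtquestion.Text);
                insert.Parameters.AddWithValue("@a", txtanswer.Text);
                insert.Parameters.AddWithValue("@ua", txtanswer.Text);
                insert.ExecuteNonQuery();
            }
        } // #tbl_emp_answer is dropped automatically when the connection closes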



  • Sql Compact and __sysobjects

    - by Scott Wisniewski
    I have some SQL Compact queries that create tables inside of a transaction. This is mainly because I need to simulate temporary tables, which SQL Compact does not support. I do this by creating a real table, and then dropping it at the end of the transaction. This mostly works. Sometimes, however, when creating the tables SQL Compact will try to acquire PAGE-level locks on the __sysobjects table. If there are several concurrent queries running that create "temp" tables, the attempt to acquire a page lock can result in a deadlock followed by a SqlLockTimeout exception. For normal tables I could fix this using a "with (rowlock)" hint. However, because I'm not writing the query that inserts into __sysobjects (SQL Server does that in response to "create table"), I can't do this. Does anyone know of a way I could get around this? I've thought about pulling the table creation out of the transaction, but that opens up the possibility of phantom temporary tables that I'd then need to clean up regularly. Ideally I'd like to avoid that if possible.
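
    If the create does move outside the transaction, the phantom-table cleanup can at least be made mechanical with a naming convention. A sketch, assuming a "tmp_" prefix so leaked tables stay identifiable (names and columns are illustrative):

        // create outside the transaction, under a sweepable name
        string tableName = "tmp_" + Guid.NewGuid().ToString("N");
        using (var create = conn.CreateCommand())
        {
            create.CommandText = "CREATE TABLE " + tableName + " (id INT, value NVARCHAR(100))";
            create.ExecuteNonQuery();
        }
        try
        {
            // ... run the transactional work against tableName ...
        }
        finally
        {
            using (var drop = conn.CreateCommand())
            {
                drop.CommandText = "DROP TABLE " + tableName;
                try { drop.ExecuteNonQuery(); }
                catch (SqlCeException) { /* leaked; a periodic sweep of tmp_* tables catches it */ }
            }
        }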


  • Currently using View, Should I use a hard table instead?

    - by 1001010101
    I am currently debating whether my table mapping_uGroups_uProducts, a view defined as follows, should be a hard table instead:

        CREATE ALGORITHM=UNDEFINED DEFINER=`root`@`localhost` SQL SECURITY DEFINER
        VIEW `db`.`mapping_uGroups_uProducts` AS
        select distinct `X`.`upID` AS `upID`, `Z`.`ugID` AS `ugID`
        from ((`db`.`mapping_uProducts_Products` `X`
          join `db`.`productsInfo` `Y` on ((`X`.`pID` = `Y`.`pID`)))
          join `db`.`mapping_uGroups_Groups` `Z` on ((`Y`.`gID` = `Z`.`gID`)));

    My current query is:

        SELECT upID
        FROM uProductsInfo
        JOIN fs_uProducts USING (upID)
        JOIN mapping_uGroups_uProducts USING (upID)  -- could be faster if we use hard table and index
        JOIN mapping_fs_key USING (fsKeyID)
        WHERE fsName="OVERALL"
          AND ugID=1
        ORDER BY score DESC
        LIMIT 0,30;

    which is pretty slow (about 10 seconds for 30 results). I think the reason my query is so slow is that it relies on a VIEW which has no index to speed things up.

        +----+-------------+----------------+--------+----------------+---------+---------+--------------------------------+-------+---------------------------------+
        | id | select_type | table          | type   | possible_keys  | key     | key_len | ref                            | rows  | Extra                           |
        +----+-------------+----------------+--------+----------------+---------+---------+--------------------------------+-------+---------------------------------+
        |  1 | PRIMARY     | mapping_fs_key | const  | PRIMARY,fsName | fsName  | 386     | const                          |     1 | Using temporary; Using filesort |
        |  1 | PRIMARY     | <derived2>     | ALL    | NULL           | NULL    | NULL    | NULL                           | 19706 | Using where                     |
        |  1 | PRIMARY     | uProductsInfo  | eq_ref | PRIMARY        | PRIMARY | 4       | mapping_uGroups_uProducts.upID |     1 | Using index                     |
        |  1 | PRIMARY     | fs_uProducts   | ref    | upID           | upID    | 4       | db.uProductsInfo.upID          |   221 | Using where                     |
        |  2 | DERIVED     | X              | ALL    | PRIMARY        | NULL    | NULL    | NULL                           | 40772 | Using temporary                 |
        |  2 | DERIVED     | Y              | eq_ref | PRIMARY        | PRIMARY | 4       | db.X.pID                       |     1 | Distinct                        |
        |  2 | DERIVED     | Z              | ref    | PRIMARY        | PRIMARY | 4       | db.Y.gID                       |     2 | Using index; Distinct           |
        +----+-------------+----------------+--------+----------------+---------+---------+--------------------------------+-------+---------------------------------+
        7 rows in set (0.48 sec)

    The EXPLAIN here looks pretty cryptic, and I don't know whether I should drop the view and write a script to just insert everything from the view into a hard table (obviously losing the flexibility of the view, since the mapping changes quite frequently). Does anyone have any idea how I can optimize my schema better?
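
    One common middle ground is to keep the view definition but materialize it into an indexed table on a schedule, since the mapping changes frequently but probably not per-query. A sketch, assuming a periodic rebuild job is acceptable (the _mat table name is illustrative):

        CREATE TABLE mapping_uGroups_uProducts_mat (
          upID INT NOT NULL,
          ugID INT NOT NULL,
          PRIMARY KEY (upID, ugID),
          KEY idx_ugID (ugID)
        );

        -- refresh (e.g. from cron); REPLACE keeps the table usable during rebuild
        REPLACE INTO mapping_uGroups_uProducts_mat (upID, ugID)
        SELECT DISTINCT X.upID, Z.ugID
        FROM mapping_uProducts_Products X
        JOIN productsInfo Y ON X.pID = Y.pID
        JOIN mapping_uGroups_Groups Z ON Y.gID = Z.gID;

    The main query then joins the materialized table instead of the view, so the optimizer can use the (upID, ugID) index rather than scanning a 19706-row derived table.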


  • Chrome renders button links completely screwed up when placed inside a paragraph

    - by Ferdy
    I am fairly proficient in CSS, but now I am running into a very strange rendering issue in Google Chrome 9. I am trying to create some fancy-looking link buttons (basically heavily styled anchors). Here is some example markup:

        <a href="" class="button">
          <figure class="sprite icon icon_back"></figure>
          Link button with icon</a>

    This markup may look a little strange to you; there's a few things you should know. I am using HTML5's figure element to include an icon as part of the button. I have the proper reset CSS applied, and Chrome can render this tag for sure. Instead of actually pointing to an image, I am applying CSS classes to the figure element. Within the CSS I am using the spriting technique to show the correct portion of a single large sprite image.

    All of this is working fine in Firefox, and actually also in Chrome. The correct rendering can be seen in the following image: [screenshot removed]. It renders like that in both Firefox and Chrome.

    Here comes the problem: if I place such a button within paragraph tags <p></p>, this is what happens in Chrome only: [screenshot removed]. Notice how the button is ripped apart? Only in Chrome, and only when placed inside a paragraph. It gets even stranger: this only happens for the first button inside the paragraph; if I were to place three buttons inside a paragraph, only the 1st one is screwed up. Your first question would probably be about the CSS. It is rather verbose, so hereby a temporary link to the page in question: Edit: link to live page removed, was only temporary for problem inspection.
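
    One guess, hedged since the live page is gone: <figure> is a block-level element, and a <p> may not contain block content, so Chrome's parser may be closing the paragraph early and splitting the anchor. A sketch of a workaround under that assumption is to use an inline element for the icon:

        <a href="" class="button">
          <span class="sprite icon icon_back"></span>
          Link button with icon</a>

        /* CSS: keep the sprite behaviour on an inline-block element */
        a.button span.icon { display: inline-block; }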


  • Why would using a Temp table be faster than a nested query?

    - by Mongus Pong
    We are trying to optimise some of our queries. One query is doing the following:

        SELECT t.TaskID, t.Name as Task, '' as Tracker, t.ClientID,
               (<complex subquery>) Date
        INTO [#Gadget]
        FROM task t

        SELECT TOP 500 TaskID, Task, Tracker, ClientID,
               dbo.GetClientDisplayName(ClientID) as Client
        FROM [#Gadget]
        ORDER BY CASE WHEN Date IS NULL THEN 1 ELSE 0 END, Date ASC

        DROP TABLE [#Gadget]

    (I have removed the complex subquery, because I don't think it's relevant other than to explain why this query has been done as a two-stage process.) Now I would have thought it would be far more efficient to merge this down into a single query using subqueries, as:

        SELECT TOP 500 TaskID, Task, Tracker, ClientID,
               dbo.GetClientDisplayName(ClientID)
        FROM (
            SELECT t.TaskID, t.Name as Task, '' as Tracker, t.ClientID,
                   (<complex subquery>) Date
            FROM task t
        ) as sub
        ORDER BY CASE WHEN Date IS NULL THEN 1 ELSE 0 END, Date ASC

    This would give the optimiser better information to work out what was going on, avoid any temporary tables, and should be faster. But it turns out it is a lot slower: 8 seconds vs under 5 seconds. I can't work out why this would be the case, as all my knowledge of databases implies that subqueries should always be faster than using temporary tables. Can anyone explain what could be going on?


  • Convenient way to do "wrong way rebase" in git?

    - by Kaz
    I want to pull in newer commits from master into topic, but not in such a way that topic changes are replayed on top of master, but rather vice versa: I want the new changes from master to be played on top of topic, and the result to be installed as the new topic head. I can get exactly the right object if I rebase master onto topic; the only problem is that the object is installed as the new head of master rather than topic. Is there some nice way to do this without manually shuffling around temporary head pointers?

    Edit: Here is how it can be achieved using a temporary branch head, but it's clumsy:

        git checkout master
        git checkout -b temp   # temp points to master
        git rebase topic       # topic is brought into temp, temp changes played on top

    Now we have the object we want, and it's pointed at by temp.

        git checkout topic
        git reset --hard temp

    Now topic has it; and all that is left is to tidy up by deleting temp:

        git branch -d temp

    Another way is to do away with temp and just rebase master, and then reset topic to master. Finally, reset master back to what it was by pulling its old head from the reflog, or a cut-and-paste buffer.
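
    That second variant can be written without any cut-and-paste, using the reflog entry the rebase itself creates. A sketch, assuming nothing else moves master in between:

        git rebase topic master        # replay master's new commits on top of topic
        git branch -f topic master     # point topic at the rebased result
        git reset --hard master@{1}    # restore master to its pre-rebase position
        git checkout topic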


  • Powershell: splatting after passing hashtable by reference

    - by user1815871
    PowerShell newbie here... I recently learned about splatting, which is very useful. I ran into a snag when I passed a hashtable by reference to a function for splatting purposes. (For brevity's sake, a silly example.)

        Function AllMyChildren {
            param (
                [ref]$ReferenceToHash
            )
            get-childitem @ReferenceToHash.Value
            # etc. etc.
        }

        $MyHash = @{
            'path'    = '*'
            'include' = '*.ps1'
            'name'    = $null
        }

        AllMyChildren ([ref]$MyHash)

    Result: an error ("Splatted variables cannot be used as part of a property or array expression. Assign the result of the expression to a temporary variable then splat the temporary variable instead."). Tried this afterward:

        $newVariable = $ReferenceToHash.Value
        get-childitem @newVariable

    That did work and seemed right per the error message. But: is it the preferred syntax in a case like this? (An "oh look, it actually worked" solution isn't always a best practice. My approach here strikes me as "Perl-minded", and perhaps in PowerShell passing by value is better, though I don't yet know the syntax for it w.r.t. a hashtable.)
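
    Worth noting, as a sketch of the by-value alternative: hashtables are .NET reference types, so passing one "by value" still lets the function see the same table, and a plain parameter variable can be splatted directly:

        Function AllMyChildren {
            param ([hashtable]$ChildItemArgs)
            Get-ChildItem @ChildItemArgs   # a plain variable, so splatting is allowed
        }

        $MyHash = @{ path = '*'; include = '*.ps1' }
        AllMyChildren $MyHash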


  • php code works with mamp but not on ubuntu server

    - by user355510
    Hello, I have started looking at a Twitter PHP library, http://github.com/abraham/twitteroauth, but I can't get it to work on my Ubuntu server; on my Mac, with MAMP, it works without any problems. This is the code that doesn't want to work on my server but does in MAMP. Yes, I have edited the config file.

        <?php
        /* Start session and load library. */
        session_start();
        require_once('twitteroauth/twitteroauth.php');
        require_once('config.php');

        /* Build TwitterOAuth object with client credentials. */
        $connection = new TwitterOAuth(CONSUMER_KEY, CONSUMER_SECRET);

        /* Get temporary credentials. */
        $request_token = $connection->getRequestToken(OAUTH_CALLBACK);

        /* Save temporary credentials to session. */
        $_SESSION['oauth_token'] = $token = $request_token['oauth_token'];
        $_SESSION['oauth_token_secret'] = $request_token['oauth_token_secret'];

        /* If last connection failed don't display authorization link. */
        switch ($connection->http_code) {
            case 200:
                /* Build authorize URL and redirect user to Twitter. */
                $url = $connection->getAuthorizeURL($token);
                header('Location: ' . $url);
                break;
            default:
                /* Show notification if something went wrong. */
                echo 'Could not connect to Twitter. Refresh the page or try again later.';
        }

    I have enabled PHP sessions on my Ubuntu server, because this code works:

        <?php
        session_start();
        $_SESSION["secretword"] = "hello there";
        $secretword = $_SESSION["secretword"];
        ?>
        <html>
        <head>
          <title>A PHP Session Example</title>
        </head>
        <body>
        <?php echo $secretword; ?>
        </body>
        </html>
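
    Since sessions are ruled out, the next usual suspects on a stock Ubuntu box are a missing php-curl extension or errors being hidden. A sketch of quick diagnostics (purely illustrative checks, not part of twitteroauth):

        <?php
        ini_set('display_errors', '1');
        error_reporting(E_ALL);
        var_dump(extension_loaded('curl'));   // twitteroauth talks to the API via ext/curl
        var_dump(function_exists('curl_init'));
        var_dump(session_save_path(), is_writable(session_save_path()));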


  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterprise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicated to it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16G RAM, 10 terabyte drives in RAID 10, and four dual-core processors. From what I have seen on other sites, we have a really robust machine as our master DB server. We just upgraded from a machine with only 4G RAM, but with similar hard drives, RAID, etc. It also ran Apache, so it was our DB server and our application server. It was getting a little slow, so we split the DB server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application.

    The problem is that the new DB server has mysqld.exe consuming 95-100% of CPU almost all the time, and it is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is about setting config files on small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down!

    FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly, because I can change some settings (like the server-id) and it will kill the server at startup.

    Here is the my.ini file:

        #MySQL Server Instance Configuration File
        # ----------------------------------------------------------------------
        # Generated by the MySQL Server Instance Configuration Wizard
        #
        #
        # Installation Instructions
        # ----------------------------------------------------------------------
        #
        # On Linux you can copy this file to /etc/my.cnf to set global options,
        # mysql-data-dir/my.cnf to set server-specific options
        # (@localstatedir@ for this installation) or to
        # ~/.my.cnf to set user-specific options.
        #
        # On Windows you should keep this file in the installation directory
        # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To
        # make sure the server reads the config file use the startup option
        # "--defaults-file".
        #
        # To run the server from the command line, execute this in a
        # command line shell, e.g.
        # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # To install the server as a Windows service manually, execute this in a
        # command line shell, e.g.
        # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # And then execute this in a command line shell to start the server, e.g.
        # net start MySQLXY
        #
        #
        # Guidelines for editing this file
        # ----------------------------------------------------------------------
        #
        # In this file, you can use all long options that the program supports.
        # If you want to know the options a program supports, start the program
        # with the "--help" option.
        #
        # More detailed information about the individual options can also be
        # found in the manual.
        #
        #
        # CLIENT SECTION
        # ----------------------------------------------------------------------
        #
        # The following options will be read by MySQL client applications.
        # Note that only client applications shipped by MySQL are guaranteed
        # to read this section. If you want your own MySQL client program to
        # honor these values, you need to specify it as an option during the
        # MySQL client library initialization.
        #
        [client]
        port=3306

        [mysql]
        default-character-set=latin1

        # SERVER SECTION
        # ----------------------------------------------------------------------
        #
        # The following options will be read by the MySQL Server. Make sure that
        # you have installed the server correctly (see above) so it reads this
        # file.
        #
        [mysqld]

        # The TCP/IP Port the MySQL Server will listen on
        port=3306

        # Path to installation directory. All paths are usually resolved relative to this.
        basedir="D:/MySQL/"

        # Path to the database root
        datadir="D:/MySQL/data"

        # The default character set that will be used when a new schema or table is
        # created and no character set is defined
        default-character-set=latin1

        # The default storage engine that will be used when creating new tables
        default-storage-engine=MYISAM

        # Set the SQL mode to strict
        #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
        # we changed this because there are a couple of queries that can get blocked otherwise
        sql-mode=""

        # performance configs
        skip-locking
        max_allowed_packet = 1M
        table_open_cache = 512

        # The maximum amount of concurrent sessions the MySQL server will
        # allow. One of these connections will be reserved for a user with
        # SUPER privileges to allow the administrator to login even if the
        # connection limit has been reached.
        max_connections=1510

        # Query cache is used to cache SELECT results and later return them
        # without actual executing the same query once again. Having the query
        # cache enabled may result in significant speed improvements, if your
        # have a lot of identical queries and rarely changing tables. See the
        # "Qcache_lowmem_prunes" status variable to check if the current value
        # is high enough for your load.
        # Note: In case your tables change very often or if your queries are
        # textually different every time, the query cache may result in a
        # slowdown instead of a performance improvement.
        query_cache_size=168M

        # The number of open tables for all threads. Increasing this value
        # increases the number of file descriptors that mysqld requires.
        # Therefore you have to make sure to set the amount of open files
        # allowed to at least 4096 in the variable "open-files-limit" in
        # section [mysqld_safe]
        table_cache=3020

        # Maximum size for internal (in-memory) temporary tables. If a table
        # grows larger than this value, it is automatically converted to a disk
        # based table. This limitation is for a single table. There can be many
        # of them.
        tmp_table_size=30M

        # How many threads we should keep in a cache for reuse. When a client
        # disconnects, the client's threads are put in the cache if there aren't
        # more than thread_cache_size threads from before. This greatly reduces
        # the amount of thread creations needed if you have a lot of new
        # connections. (Normally this doesn't give a notable performance
        # improvement if you have a good thread implementation.)
        thread_cache_size=64

        #*** MyISAM Specific options

        # The maximum size of the temporary file MySQL is allowed to use while
        # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE).
        # If the file-size would be bigger than this, the index will be created
        # through the key cache (which is slower).
        myisam_max_sort_file_size=100G

        # If the temporary file used for fast index creation would be bigger
        # than using the key cache by the amount specified here, then prefer the
        # key cache method. This is mainly used to force long character keys in
        # large tables to use the slower key cache method to create the index.
        myisam_sort_buffer_size=64M

        # Size of the Key Buffer, used to cache index blocks for MyISAM tables.
        # Do not set it larger than 30% of your available memory, as some memory
        # is also required by the OS to cache rows. Even if you're not using
        # MyISAM tables, you should still set it to 8-64M as it will also be
        # used for internal temporary disk tables.
        key_buffer_size=3072M

        # Size of the buffer used for doing full table scans of MyISAM tables.
        # Allocated per thread, if a full scan is needed.
        read_buffer_size=2M
        read_rnd_buffer_size=8M

        # This buffer is allocated when MySQL needs to rebuild the index in
        # REPAIR, OPTIMIZE, ALTER table statements as well as in LOAD DATA INFILE
        # into an empty table. It is allocated per thread so be careful with
        # large settings.
        sort_buffer_size=2M

        #*** INNODB Specific options ***
        innodb_data_home_dir="D:/MySQL InnoDB Datafiles/"

        # Use this option if you have a MySQL server with InnoDB support enabled
        # but you do not plan to use it. This will save memory and disk space
        # and speed up some things.
        skip-innodb

        # Additional memory pool that is used by InnoDB to store metadata
        # information. If InnoDB requires more memory for this purpose it will
        # start to allocate it from the OS. As this is fast enough on most
        # recent operating systems, you normally do not need to change this
        # value. SHOW INNODB STATUS will display the current amount used.
        innodb_additional_mem_pool_size=11M

        # If set to 1, InnoDB will flush (fsync) the transaction logs to the
        # disk at each commit, which offers full ACID behavior. If you are
        # willing to compromise this safety, and you are running small
        # transactions, you may set this to 0 or 2 to reduce disk I/O to the
        # logs. Value 0 means that the log is only written to the log file and
        # the log file flushed to disk approximately once per second. Value 2
        # means the log is written to the log file at each commit, but the log
        # file is only flushed to disk approximately once per second.
        innodb_flush_log_at_trx_commit=1

        # The size of the buffer InnoDB uses for buffering log data. As soon as
        # it is full, InnoDB will have to flush it to disk. As it is flushed
        # once per second anyway, it does not make sense to have it very large
        # (even with long transactions).
        innodb_log_buffer_size=6M

        # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and
        # row data. The bigger you set this the less disk I/O is needed to
        # access data in tables. On a dedicated database server you may set this
        # parameter up to 80% of the machine physical memory size. Do not set it
        # too large, though, because competition of the physical memory may
        # cause paging in the operating system. Note that on 32bit systems you
        # might be limited to 2-3.5G of user level memory per process, so do not
        # set it too high.
        innodb_buffer_pool_size=500M

        # Size of each log file in a log group. You should set the combined size
        # of log files to about 25%-100% of your buffer pool size to avoid
        # unneeded buffer pool flush activity on log file overwrite. However,
        # note that a larger logfile size will increase the time needed for the
        # recovery process.
        innodb_log_file_size=100M

        # Number of threads allowed inside the InnoDB kernel. The optimal value
        # depends highly on the application, hardware as well as the OS
        # scheduler properties. A too high value may lead to thread thrashing.
        innodb_thread_concurrency=10

        # replication settings (this is the master)
        log-bin=log
        server-id = 1

    Thanks for all the help. It is greatly appreciated.
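
    Given a MyISAM-only, CPU-bound server, before tuning further it usually pays to confirm where the cycles actually go. A sketch of quick checks (standard status queries, nothing machine-specific):

        SHOW FULL PROCESSLIST;                   -- what mysqld is doing right now
        SHOW GLOBAL STATUS LIKE 'Key_read%';     -- key_buffer hit rate (reads vs. requests)
        SHOW GLOBAL STATUS LIKE 'Created_tmp%';  -- on-disk temp tables from sorts/GROUP BY
        SHOW GLOBAL STATUS LIKE 'Qcache%';       -- query cache pruning and invalidation churn

    One caution on the file above: a large query cache (168M here) is a common CPU sink on write-heavy MyISAM workloads, since every write to a table invalidates all cached results for that table.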


  • Create table and call it from sql

    - by user1770816
    I have a PL/SQL function which creates a new temporary table. For creating the table I use EXECUTE IMMEDIATE. When I run my function in Oracle SQL Developer everything is OK; the function creates the temp table without errors. But when I use SQL:

        Select function_name from table_name

    I get these exceptions:

        ORA-14552: cannot perform a DDL, commit or rollback inside a query or DML
        ORA-06512: at "SYSTEM.GET_USERS", line 10
        14552. 00000 - "cannot perform a DDL, commit or rollback inside a query or DML "
        *Cause: DDL operations like creation tables, views etc. and transaction
                control statements such as commit/rollback cannot be performed
                inside a query or a DML statement.

    Update: Sorry, I write from a tablet PC and have problems with formatting text. My function:

        CREATE OR REPLACE FUNCTION GET_USERS
        (
          USERID IN VARCHAR2
        ) RETURN VARCHAR2 AS
          request VARCHAR2(520) := 'CREATE GLOBAL TEMPORARY TABLE ';
        BEGIN
          request := request || 'temp_table_' || userid
                  || '(user_name varchar2(50), user_id varchar2(20), is_administrator varchar2(5))'
                  || ' ON COMMIT PRESERVE ROWS';
          EXECUTE IMMEDIATE (request);
          RETURN 'true';
        END GET_USERS;
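
    A sketch of one common workaround, assuming the side effects are acceptable: declare the function as an autonomous transaction, so the DDL runs in its own transaction rather than inside the outer query (which is what ORA-14552 forbids):

        CREATE OR REPLACE FUNCTION GET_USERS
        (
          USERID IN VARCHAR2
        ) RETURN VARCHAR2 AS
          PRAGMA AUTONOMOUS_TRANSACTION;
        BEGIN
          EXECUTE IMMEDIATE 'CREATE GLOBAL TEMPORARY TABLE temp_table_' || userid
            || ' (user_name varchar2(50), user_id varchar2(20), is_administrator varchar2(5))'
            || ' ON COMMIT PRESERVE ROWS';
          RETURN 'true';
        END GET_USERS;

    That said, a GLOBAL TEMPORARY TABLE in Oracle is a permanent object whose data is session-private, so it normally only needs to be created once, not on every call.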


  • merging two tables, while applying aggregates on the duplicates (max,min and sum)

    - by cloudraven
    I have a table (let's call it log) with a few millions of records. Among the fields I have Id, Count, FirstHit, LastHit:

        Id       - the record id
        Count    - number of times this Id has been reported
        FirstHit - earliest timestamp with which this Id was reported
        LastHit  - latest timestamp with which this Id was reported

    This table has only one record for any given Id. Every day I get, into another table (let's call it feed), around half a million records with these fields among many others:

        Id
        Timestamp - entry date and time

    This table can have many records for the same Id. What I want to do is to update log in the following way:

        Count    - log's Count, plus the count() of records for that Id found in feed
        FirstHit - the earlier of the current value in log or the minimum value in feed for that Id
        LastHit  - the later of the current value in log or the maximum value in feed for that Id

    It should be noted that many of the Ids in feed are already in log. The simple thing that worked is to create a temporary table and insert into it the union of both, as in:

        Select Id, Min(Timestamp) As FirstHit, MAX(Timestamp) as LastHit, Count(*) as Count
        FROM feed GROUP BY Id
        UNION ALL
        Select Id, FirstHit, LastHit, Count FROM log;

    From that temporary table I do a select that aggregates Min(FirstHit), Max(LastHit) and Sum(Count):

        Select Id, Min(FirstHit), Max(LastHit), Sum(Count) FROM @temp GROUP BY Id;

    and that gives me the end result. I could then delete everything from log and replace it with everything in temp, or craft an update for the common records and insert the new ones. However, I think both are highly inefficient. Is there a more efficient way of doing this, perhaps doing the update in place in the log table?
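
    The "update the common records, insert the new ones" variant can be done in one statement. A sketch, assuming SQL Server 2008+ (the @temp syntax above suggests SQL Server) and that Id is the primary key of log:

        MERGE log AS t
        USING (
            SELECT Id, MIN(Timestamp) AS FirstHit, MAX(Timestamp) AS LastHit,
                   COUNT(*) AS Cnt
            FROM feed
            GROUP BY Id
        ) AS f
        ON t.Id = f.Id
        WHEN MATCHED THEN UPDATE SET
            t.[Count]  = t.[Count] + f.Cnt,
            t.FirstHit = CASE WHEN f.FirstHit < t.FirstHit THEN f.FirstHit ELSE t.FirstHit END,
            t.LastHit  = CASE WHEN f.LastHit  > t.LastHit  THEN f.LastHit  ELSE t.LastHit  END
        WHEN NOT MATCHED THEN
            INSERT (Id, [Count], FirstHit, LastHit)
            VALUES (f.Id, f.Cnt, f.FirstHit, f.LastHit);

    This touches each log row at most once and never rewrites the millions of unaffected records.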


  • Is there a set based solution for this problem?

    - by NYSystemsAnalyst
    We have a table set up as follows:

        |ID|EmployeeID|Date     |Category       |Hours|
        |1 |1         |1/1/2010 |Vacation Earned|2.0  |
        |2 |2         |2/12/2010|Vacation Earned|3.0  |
        |3 |1         |2/4/2010 |Vacation Used  |1.0  |
        |4 |2         |5/18/2010|Vacation Earned|2.0  |
        |5 |2         |7/23/2010|Vacation Used  |4.0  |

    The business rules are: vacation balance is calculated as vacation earned minus vacation used, and vacation used is always applied against the oldest vacation earned amount first.

    We need to return the rows for Vacation Earned that have not been offset by vacation used. If vacation used has only offset part of a vacation earned record, we need to return that record showing the difference. For example, using the above table, the result set would look like:

        |ID|EmployeeID|Date     |Category       |Hours|
        |1 |1         |1/1/2010 |Vacation Earned|1.0  |
        |4 |2         |5/18/2010|Vacation Earned|1.0  |

    Note that record 2 was eliminated because it was completely offset by used time, but records 1 and 4 were only partially used, so they were calculated and returned as such.

    The only way we have thought of to do this is to get all of the vacation earned records into a temporary table. Then, get the total vacation used and loop through the temporary table, deleting the oldest record and subtracting that value from the total vacation used until the total vacation used is zero. We could clean it up for when the remaining vacation used is only part of the oldest vacation earned record. This would leave us with just the outstanding vacation earned records. This works, but it is very inefficient and performs poorly, and performance will only degrade over time as more and more records are added. Are there any suggestions for a better solution, preferably set-based? If not, we'll just have to go with this.
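
    A set-based version falls out of comparing each earned row's running total with the employee's total used hours. A sketch, assuming SQL Server and a table named VacationLog (a hypothetical name, since the real one isn't given):

        WITH earned AS (
            SELECT e.ID, e.EmployeeID, e.Date, e.Hours,
                   (SELECT SUM(e2.Hours) FROM VacationLog e2
                     WHERE e2.EmployeeID = e.EmployeeID
                       AND e2.Category = 'Vacation Earned'
                       AND e2.Date <= e.Date) AS RunningEarned
            FROM VacationLog e
            WHERE e.Category = 'Vacation Earned'
        ), used AS (
            SELECT EmployeeID, SUM(Hours) AS TotalUsed
            FROM VacationLog
            WHERE Category = 'Vacation Used'
            GROUP BY EmployeeID
        )
        SELECT e.ID, e.EmployeeID, e.Date, 'Vacation Earned' AS Category,
               CASE WHEN e.RunningEarned - ISNULL(u.TotalUsed, 0) >= e.Hours
                    THEN e.Hours
                    ELSE e.RunningEarned - ISNULL(u.TotalUsed, 0)
               END AS Hours
        FROM earned e
        LEFT JOIN used u ON u.EmployeeID = e.EmployeeID
        WHERE e.RunningEarned - ISNULL(u.TotalUsed, 0) > 0;

    Each earned row survives only for the portion of its hours above the employee's total used, which reproduces the oldest-first rule without a loop; on the sample data it returns exactly the two rows shown above.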



  • How to get the recently viewed pictures on the web browser?

    - by quantity
    I want to retrieve the recently viewed pictures from IE. I know that all the files from IE exist in the temporary internet directory, commonly with a path like "C:\Documents and Settings\[account]\Local Settings\Temporary Internet Files". Here something strange comes up for me. I wrote a C++ program to list that directory, and the result says it contains three subdirectories and one file. The subdirectories are Content.IE5, OIS, and OLK145, each containing lots of pictures, which I think are the ones I browsed recently on the web. The only file is desktop.ini, which is not my concern. However, when I open the directory in the file-system explorer, there are no subdirectories at all, but a lot of files, different from the ones in the subdirectories retrieved by the program.

    I have several questions. First of all, why does the content of the Temporary Internet Files directory seem different? Which is the actual situation of the directory? Second, I found that in the file-system explorer, the files in the directory seem like links to the ones on the web, not files physically existing on my computer; is this true? Finally, how can I get the pictures viewed recently in IE with C++, as well as their original URLs?
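
    Explorer shows a merged, virtual view built from the cache index, which is why it differs from the raw directory; the supported way to enumerate the cache (including original URLs) is the WinINet cache API. A minimal sketch, assuming a Win32 build linked against wininet.lib:

        #include <windows.h>
        #include <wininet.h>
        #include <cstdio>
        #include <vector>

        int main() {
            DWORD size = 0;
            // first call just reports the required buffer size
            FindFirstUrlCacheEntryA(NULL, NULL, &size);
            std::vector<char> buf(size);
            INTERNET_CACHE_ENTRY_INFOA* info =
                reinterpret_cast<INTERNET_CACHE_ENTRY_INFOA*>(buf.data());
            HANDLE h = FindFirstUrlCacheEntryA(NULL, info, &size);
            if (!h) return 1;
            bool more = true;
            while (more) {
                // lpszSourceUrlName is the original URL; lpszLocalFileName is
                // the on-disk cache file (typically inside Content.IE5)
                if (info->lpszLocalFileName)
                    std::printf("%s -> %s\n", info->lpszSourceUrlName, info->lpszLocalFileName);
                size = static_cast<DWORD>(buf.size());
                while (!FindNextUrlCacheEntryA(h, info, &size)) {
                    if (GetLastError() == ERROR_INSUFFICIENT_BUFFER) {
                        buf.resize(size);  // grow and retry this entry
                        info = reinterpret_cast<INTERNET_CACHE_ENTRY_INFOA*>(buf.data());
                    } else {
                        more = false;      // ERROR_NO_MORE_ITEMS or a real failure
                        break;
                    }
                }
            }
            FindCloseUrlCache(h);
            return 0;
        }

    Filtering on an image/* Content-Type (the info structure carries the headers in lpHeaderInfo) narrows the list to pictures.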


  • Persisting object changes from child form to parent form based on button press.

    - by Shyran
    I have created a form that is used for both adding and editing a custom object. Which mode the form takes is provided by an enum value passed from the calling code, along with an object of the custom type. All of my controls are data-bound to specific properties of the custom object. When the form is in Add mode this works great, since as the controls are updated with data, the underlying object is as well. In Edit mode, however, I keep two variables of the custom object supplied by the calling code: the original, and a temporary one made through deep copying. The controls are then bound to the temporary copy, which makes it easy to discard the changes if the user clicks the Cancel button.

    What I want to know is how to persist those changes back to the original object if the user clicks the OK button, since there is now a disconnect because of the deep copying. I am trying to avoid implementing an internal property on the Add/Edit form if I can. Below is an example of my code:

        public AddEditCustomerDialog(Customer customer, DialogMode mode)
        {
            InitializeComponent();
            InitializeCustomer(customer, mode);
        }

        private void InitializeCustomer(Customer customer, DialogMode mode)
        {
            this.customer = customer;
            if (mode == DialogMode.Edit)
            {
                this.Text = "Edit Customer";
                this.tempCustomer = ObjectCopyHelper.DeepCopy(this.customer);
                this.customerListBindingSource.DataSource = this.tempCustomer;
                this.phoneListBindingSource.DataSource = this.tempCustomer.PhoneList;
            }
            else
            {
                this.customerListBindingSource.DataSource = this.customer;
                this.phoneListBindingSource.DataSource = this.customer.PhoneList;
            }
        }
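
    Since the form already holds both references, one option is to copy the edited state back in the OK handler. A sketch, assuming a member-wise CopyTo helper (hypothetical; it could live next to ObjectCopyHelper.DeepCopy) so the caller's reference is updated in place:

        private void btnOK_Click(object sender, EventArgs e)
        {
            if (this.mode == DialogMode.Edit)
            {
                // push the edited values back into the object the caller holds
                ObjectCopyHelper.CopyTo(this.tempCustomer, this.customer);
            }
            this.DialogResult = DialogResult.OK;
        }

    Because the caller still holds the same reference, no extra property on the form is needed to hand the result back.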


  • Preferred way of filling up a C++ vector of structs

    - by henle
    Alternative 1, reusing a temporary variable:

        Sticker sticker;
        sticker.x = x + foreground.x;
        sticker.y = foreground.y;
        sticker.width = foreground.width;
        sticker.height = foreground.height;
        board.push_back(sticker);

        sticker.x = x + outline.x;
        sticker.y = outline.y;
        sticker.width = outline.width;
        sticker.height = outline.height;
        board.push_back(sticker);

    Alternative 2, scoping the temporary variable:

        {
            Sticker sticker;
            sticker.x = x + foreground.x;
            sticker.y = foreground.y;
            sticker.width = foreground.width;
            sticker.height = foreground.height;
            board.push_back(sticker);
        }
        {
            Sticker sticker;
            sticker.x = x + outline.x;
            sticker.y = outline.y;
            sticker.width = outline.width;
            sticker.height = outline.height;
            board.push_back(sticker);
        }

    Alternative 3, writing straight to the vector memory:

        {
            board.push_back(Sticker());
            Sticker &sticker = board.back();
            sticker.x = x + foreground.x;
            sticker.y = foreground.y;
            sticker.width = foreground.width;
            sticker.height = foreground.height;
        }
        {
            board.push_back(Sticker());
            Sticker &sticker = board.back();
            sticker.x = x + outline.x;
            sticker.y = outline.y;
            sticker.width = outline.width;
            sticker.height = outline.height;
        }

    Which approach do you prefer?
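
    For comparison, a sketch of a fourth option, assuming C++11 and that Sticker is an aggregate with members in (x, y, width, height) order:

        // brace-initialize a temporary; push_back moves it into the vector
        board.push_back(Sticker{x + foreground.x, foreground.y,
                                foreground.width, foreground.height});
        board.push_back(Sticker{x + outline.x, outline.y,
                                outline.width, outline.height});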


  • Data mixing SQL Server

    - by Pythonizo
    I have three tables and a range of two dates:

        Services
        ServicesClients
        ServicesClientsDone
        @StartDate
        @EndDate

    Services:

        ID | Name
        1  | Supervisor
        2  | Monitor
        3  | Manufacturer

    ServicesClients:

        IDServiceClient | IDClient | IDService
        1               | 1        | 1
        2               | 1        | 2
        3               | 2        | 2
        4               | 2        | 3

    ServicesClientsDone:

        IDServiceClient | Period
        1               | 201208
        3               | 201210

    (Period = YYYYMM.) I need to insert into ServicesClientsDone the months in the range from @StartDate up to @EndDate. I also have a temporary table (#Periods) with the following list:

        Period
        201208
        201209
        201210

    The query I need should give me back the following list, that is, the client services for the periods in the temporary table that are not already inserted:

        IDServiceClient | Period
        1               | 201209
        1               | 201210
        2               | 201208
        2               | 201209
        2               | 201210
        3               | 201208
        3               | 201209
        4               | 201208
        4               | 201209
        4               | 201210

    This is what I have. The #Periods table:

        DECLARE @i int
        DECLARE @mm int
        DECLARE @yyyy int
        DECLARE @StartDate datetime
        DECLARE @EndDate datetime
        SET @EndDate = (SELECT GETDATE())
        SET @StartDate = (SELECT DATEADD(MONTH, -3, GETDATE()))

        CREATE TABLE #Periods (Period int)

        SET @i = 0
        WHILE @i <= DATEDIFF(MONTH, @StartDate, @EndDate)
        BEGIN
            SET @mm = DATEPART(MONTH, DATEADD(MONTH, @i, @StartDate))
            SET @yyyy = DATEPART(YEAR, DATEADD(MONTH, @i, @StartDate))
            INSERT INTO #Periods (Period)
            VALUES (CAST(@yyyy as varchar(4)) + RIGHT('00' + CONVERT(varchar(6), @mm), 2))
            SET @i = @i + 1
        END

    The relation between ServicesClients and Services:

        SELECT s.Name, sc.IDClient
        FROM Services s
        JOIN ServicesClients sc ON sc.IDService = s.ID

    Services already done, and when:

        SELECT s.Name, scd.Period
        FROM Services s
        JOIN ServicesClients sc ON sc.IDService = s.ID
        JOIN ServicesClientsDone scd ON scd.IDServiceClient = sc.IDServiceClient
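
    The missing-combinations query is then a cross join of ServicesClients with #Periods, minus what is already in ServicesClientsDone. A sketch, assuming #Periods is populated as above:

        SELECT sc.IDServiceClient, p.Period
        FROM ServicesClients sc
        CROSS JOIN #Periods p
        WHERE NOT EXISTS (
            SELECT 1
            FROM ServicesClientsDone scd
            WHERE scd.IDServiceClient = sc.IDServiceClient
              AND scd.Period = p.Period
        )
        ORDER BY sc.IDServiceClient, p.Period;

    With the sample data (4 client services x 3 periods, 2 combinations already done), this returns exactly the 10 rows listed above.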


  • PHP function to handle most database queries has a problem with results. I am getting the right number of rows, but all rows contain the same values

    - by asdasds
    Here is my little function. It does not handle the results correctly: I do get all the rows that I want, but all the rows of the $results array contain the exact same values. So I make two arrays, a temporary array to hold the values after each fetch, and another array to hold all the temporary arrays. First I take the temp array and map its keys to the column names. Then I give it to bind_result, call fetch(), and use it like I would any other result value. Could this be because I re-use the $results array?

    $numresults is the number of values you are taking from each row; if 0, you are not getting any results back.

        function db_query($db, $query, $params = NULL, $numresults = 0)
        {
            if ($stmt = $db->prepare($query)) {
                if ($params != NULL) {
                    call_user_func_array(array($stmt, 'bind_param'), $params);
                }
                if (!$stmt->execute()) {
                    //echo 'exec error: ', $db->error;
                    return false;
                }
                if ($numresults > 0) {
                    $results = array();
                    $tmpresult = array();
                    $meta = $stmt->result_metadata();
                    while ($columnName = $meta->fetch_field())
                        $tmpresult[] = &$results[$columnName->name];
                    call_user_func_array(array($stmt, 'bind_result'), $tmpresult);
                    $meta->close();
                    $results = array();
                    while ($stmt->fetch())
                        $results[] = $tmpresult;
                }
                $stmt->close();
            } else {
                //echo 'prepare error: ', $db->error;
                return false;
            }
            if ($numresults == 0)
                return true;
            return $results;
        }
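
    For what it's worth, identical rows are the classic symptom of storing the bound array itself: $tmpresult holds references, so every stored "row" aliases the same cells. A sketch of the usual fix, assuming that diagnosis, is to copy the current values on each fetch:

        while ($stmt->fetch()) {
            $row = array();
            foreach ($tmpresult as $i => $value) {
                $row[$i] = $value;   // copies the value, not the reference
            }
            $results[] = $row;
        }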


  • Powershell wait for file to delete, then copy a folder

    - by user3317623
    Morning guys. I have a couple of scripts that have to sync a folder from the network server to the local terminal server, and lastly into %LOCALAPPDATA%. I need to first check if the folder is being synced (this is indicated by a temporary COPYING.TXT on the server), wait until that is removed, and THEN copy into %LOCALAPPDATA%. Something like this:

        1. A server-side script executes, which syncs my folder to all of my terminal
           servers. It creates a temporary COPYING.TXT file, which indicates the sync
           is in progress.
        2. Once the sync is finished, the script removes COPYING.TXT.
        3. If someone logs on during the sync, I need a script to wait until COPYING.TXT
           is deleted (i.e. the sync is finished), then resume the local sync into
           their %LOCALAPPDATA%.

        do { cp c:\folder\program $env:LOCALAPPDATA }
        while (!(test-path c:\folder\COPYING.txt))

    (So that copies the folder while the file DOESN'T exist, but I don't think that exits cleanly.) Or:

        while (!(test-path c:\folder\COPYING.txt)) {
            cp c:\folder\program $env:LOCALAPPDATA\ -recurse -force
            if (!(test-path c:\folder\program)) { return }
        }

    But that script quits if COPYING.TXT exists. I think I need to create a function and insert that function within itself, or a nested while loop, but that is starting to make my head hurt. Any help would be greatly appreciated. Thanks guys.
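
    The inversion of the condition is the whole trick: wait WHILE the marker exists, then copy once. A sketch, assuming COPYING.TXT reliably marks a sync in progress:

        # block until the server-side sync finishes
        while (Test-Path 'C:\folder\COPYING.TXT') {
            Start-Sleep -Seconds 2
        }
        Copy-Item 'C:\folder\program' $env:LOCALAPPDATA -Recurse -Force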


  • bubble sort logic error

    - by Arianule
    I was trying a basic sorting exercise and I was hoping I could receive some help with what is probably a basic logic error.

        int[] numbers = new int[] { 2, 5, 11, 38, 24, 6, 9, 0, 83, 7 };

        for (int loop = 0; loop < numbers.Length; loop++)
        {
            Console.WriteLine(numbers[loop]);
        }

        Console.WriteLine("Performing a bubble sort");

        bool flag = false;
        do
        {
            for (int loop = 0; loop < numbers.Length - 1; loop++)
            {
                if (numbers[loop] > numbers[loop + 1])
                {
                    int temporary = numbers[loop];
                    numbers[loop] = numbers[loop + 1];
                    numbers[loop + 1] = temporary;
                    flag = true;
                }
            }
        } while (flag == false);

        for (int loop = 0; loop < numbers.Length; loop++)
        {
            Console.WriteLine(numbers[loop]);
        }

    kind regards
    arianule
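
    For reference, the usual flag handling resets the flag at the start of each pass and loops while a swap happened (the code above instead stops after the first pass that performs any swap). A sketch of that shape:

        bool swapped;
        do
        {
            swapped = false;
            for (int i = 0; i < numbers.Length - 1; i++)
            {
                if (numbers[i] > numbers[i + 1])
                {
                    int temporary = numbers[i];
                    numbers[i] = numbers[i + 1];
                    numbers[i + 1] = temporary;
                    swapped = true;  // another pass is needed
                }
            }
        } while (swapped);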

