Search Results

Search found 242 results on 10 pages for 'calvin tiger'.

Page 2/10 | < Previous Page | 1 2 3 4 5 6 7 8 9 10  | Next Page >

  • How do I get GNU screen not to start in my home directory in OS X?

    - by Benjamin Oakes
    GNU Screen (screen) behaves differently on OS X 10.5 (Leopard) and 10.6 (Snow Leopard) than on Linux (at least Ubuntu, Red Hat, and Gentoo) and OS X 10.4 (Tiger). On 10.5 and 10.6, new screen windows (made with screen or ^A c) always place me in my home directory ~. On Linux and OS X Tiger, new windows inherit the pwd of wherever the window was created. Made-up examples to illustrate what I mean:
    Tiger:
    $ cd ~/foo
    $ pwd
    /Users/ben/foo
    $ screen
    $ pwd
    /Users/ben/foo
    $ screen   # or ^A c
    $ pwd
    /Users/ben/foo
    Leopard, Snow Leopard:
    $ cd ~/foo
    $ pwd
    /Users/ben/foo
    $ screen
    $ pwd
    /Users/ben
    $ screen   # or ^A c
    $ pwd
    /Users/ben
    How do I get Leopard and Snow Leopard to behave like Tiger used to?
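    One workaround sketch (an assumption, not a confirmed fix for Apple's patched screen): keep the session's default directory synced to the shell's current directory, so new windows open where you are. Something like this in ~/.bashrc:

        # Only inside a screen session ($STY is set by screen):
        if [ -n "$STY" ]; then
          # Before each prompt, tell screen to use this shell's cwd as the
          # default directory for new windows (^A c).
          PROMPT_COMMAND='screen -X chdir "$PWD"'
        fi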

    Read the article

  • Spaces around all hyphens in a string without double-up

    - by Dave
    I'm after a regex that puts spaces around each "-" in a string, e.g. "02 jaguar-leopard, tiger-panther 08" would become "02 jaguar - leopard, tiger - panther 08". Note that if the "-" already has spaces around it, no changes are to be made, e.g. "02 jaguar - leopard, tiger - panther 08" should not become "02 jaguar  -  leopard, tiger  -  panther 08" (with doubled spaces). The number of hyphens is unknown in advance. Thanks for any ideas... Edit: I'm not actually using a programming language for this. I'm using Ant Renamer (a mass file renaming utility). There are two fields in the renamer GUI, "Expression" and "New name", to provide inputs. This is from the help file as an example:
    Swapping artist and title in mp3 file names:
    "Expression" = (.*) - (.*)\.mp3
    "New name"   = $2 - $1.mp3
    Extracting episode number and title from series video files, with episode number as SnnEmm followed by title:
    "Expression" = Code\.Quantum\.S([0-9]{2})E([0-9]{2})\.(.*)\.FRENCH.XViD\.avi
    "New name"   = Code Quantum - $1$2 - $3.avi
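    A sketch of the usual trick (assuming a regex flavor with POSIX classes; shown in sed only for illustration, outside Ant Renamer): match the hyphen together with any surrounding whitespace and replace the whole thing with " - ", so already-spaced hyphens are normalized rather than doubled:

        echo '02 jaguar-leopard, tiger - panther 08' | sed -E 's/[[:space:]]*-[[:space:]]*/ - /g'
        # -> 02 jaguar - leopard, tiger - panther 08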

    Read the article

  • Ubuntu and racadm

    - by lmqcn
    I recently purchased a used PowerEdge 1850 server and it came with a DRAC card. After wiping the HDD and installing Ubuntu Server 12.04.3 LTS amd64 on it, I am now trying to gain access to the DRAC, which I believe is version 4. I have properly configured the DRAC to use its own IP on my LAN, and when I point my browser to the IP address I am greeted with the DRAC login page (it has the Dell logo and everything). However, after trying the credentials root/calvin, I was denied access. So I think the previous owners had set their own password. After doing some reading, it appears I can reset the credentials to the default using racadm config -g cfgUserAdmin -o cfgUserAdminPassword -i 1 newpassword but upon entering the command, I get this error: bash: /usr/sbin/racadm: No such file or directory This holds true even if I run sudo su prior to running the racadm command. If, however, I run sudo racadm config -g cfgUserAdmin -o cfgUserAdminPassword -i 1 newpassword there are no errors. Yet when I try to log into the DRAC via the web interface using the credentials root/newpassword, I am still not granted access. I installed the Dell utilities via the guide at https://wiki.ubuntu.com/HardwareSupportMachinesServersDellNotes. I first tried to install the 64-bit version from the Dell repositories, but after that was unsuccessful, I just followed the guide verbatim. No errors were produced in either case. I even followed the information at the bottom of the guide by executing sudo pppd /dev/ttyS1 1382400 crtscts noipdefault noauth lock persist connect 'chat -v "" CLIENT CLIENTSERVER "\\c"' but obviously replacing /dev/ttyS1 with the correct information for my system. ls -l /usr/sbin/ | grep racadm yields -rwxr-xr-x 1 root root 87930 Sep 16 04:03 racadm I have tried these credentials after each attempt at changing the password: root/calvin root/newpassword admin/calvin admin/newpassword All have been unsuccessful. What is the next course of action I should take?
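    One avenue worth checking (an assumption based on the symptom: "No such file or directory" for a file that plainly exists is the classic sign of a 32-bit binary missing its loader on a 64-bit system, and sudo may simply be resolving a different racadm via its own PATH). A quick diagnostic sketch:

        type -a racadm                  # what does *your* shell resolve?
        sudo sh -c 'command -v racadm'  # what does sudo's PATH resolve?
        file /usr/sbin/racadm           # "ELF 32-bit" would confirm the theory
        ls /lib/ld-linux.so.2           # is the 32-bit loader installed?
        sudo apt-get install ia32-libs  # 12.04-era package for the 32-bit runtime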

    Read the article

  • problem with customized destination path

    - by calvin
    During installation I'm giving the user an option to change the default installation dir. Once this is done, installation goes fine and installs to the user-specified location. But when I use INSTALLDIR to modify a few registry entries, INSTALLDIR has the old default value, not the user-specified one. This is happening to only one package and I couldn't find the reason. Any help will save my time. I use InstallShield 12 on Win2k3 x86. ~calvin

    Read the article

  • Why there is no open source framework (like Java) for C# application development?

    - by Calvin
    Hi, C# has become much more popular than Java in recent years. As a general-purpose programming language, many people feel that C# is better designed than Java. Why is there still no open source framework for C# application development? Why has no one taken the initiative to develop an open source framework for C# comparable to Java's? (Many people say Mono is not a mature framework and should not be used in serious application development.) Calvin

    Read the article

  • How to prevent ubuntu from connecting to wifi hotspots automatically

    - by calvin tiger
    Note: this question is distinct from "How to disable automatically connecting to WiFi?", as I do not wish to disable automatic WiFi connection in general. Problem: The Ubuntu WiFi module preferentially auto-connects to WiFi networks without a password, even if there is an already-known, password-protected WiFi network nearby. Worse, most of the time these "unprotected" networks are in fact hotspots that require authentication from the browser. Example: I am at home, and most of the time my Ubuntu laptop will connect by itself to a nearby hotspot instead of choosing my local ADSL box (password-protected, with a password the computer already knows). I then have to select my own WiFi network manually. Is there a way to disable automatic connection to all hotspots?
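    A sketch of one approach (assumes a NetworkManager recent enough to expose these nmcli subcommands; older releases offer the same flags in the connection editor GUI, and the connection names below are hypothetical):

        nmcli -f NAME,AUTOCONNECT connection show             # list saved networks
        nmcli connection modify "SomeHotspot" connection.autoconnect no
        nmcli connection modify "MyADSLBox" connection.autoconnect-priority 10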

    Read the article

  • Updating the jump in game

    - by Luka Tiger
    I am making a Java game and I want it to run the same at any FPS, so I'm using the time delta between updates. This is the update method of the Player:
    public void update(long timeDelta) {
        // speed is the movement speed of the player on the X axis.
        // timeDelta is in nanoseconds, so divide by 1000000000 to get seconds.
        if (Input.keyDown(37))
            this.velocityX -= speed * (timeDelta / 1000000000.0);
        if (Input.keyDown(39))
            this.velocityX += speed * (timeDelta / 1000000000.0);
        if (Input.keyPressed(38)) {
            this.velocityY -= 6;
        }
        velocityY += g * (timeDelta / 1000000000.0); // applying gravity
        // move() moves the player by velocityX and velocityY and checks collision
        move(velocityX, velocityY);
        this.velocityX = 0.0;
    }
    The strange thing is that with unlimited FPS (and update count) my player jumps about 10 blocks, and it jumps even higher as the FPS increases. If I limit the FPS, it jumps 4 blocks. (A block is 32x32.) I have just realized the problem is this:
    if (Input.keyPressed(38)) {
        this.velocityY -= 6;
    }
    I add -6 to velocityY, which changes the player's Y proportionally to the number of updates rather than to time. But I don't know how to fix this.
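    A sketch of the likely fix (assuming move() applies its arguments once per frame): velocityY is a true velocity in units per second, so the vertical displacement must be scaled by the elapsed time, exactly as the X axis already is; otherwise more frames per second means more vertical movement per second:

        public void update(long timeDelta) {
            double dt = timeDelta / 1000000000.0; // nanoseconds -> seconds

            double dx = 0.0;
            if (Input.keyDown(37)) dx -= speed * dt;
            if (Input.keyDown(39)) dx += speed * dt;

            // The jump impulse itself is fine: an instantaneous change of
            // velocity should not depend on the frame time.
            if (Input.keyPressed(38)) velocityY -= 6;

            velocityY += g * dt;      // integrate gravity (units/s)
            move(dx, velocityY * dt); // scale the Y displacement by dt too
        }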

    Read the article

  • If and else condition not working properly in xna [closed]

    - by user1090751
    I am developing a chess-like game and I want to show an error message if the user tries to place a piece in a box that is not empty. In a given place, if it is empty, the (2D) object is placed; otherwise an error message should be shown. However, in my program the message shows every time, i.e. even when I place an object on an empty spot it shows the error message. Please see the code below:
    protected override void Update(GameTime gameTime) {
        // Allows the game to exit
        if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
            this.Exit();
        // TODO: Add your update logic here
        for (int i = 0; i < 25; i++) {
            MouseState mouseState;
            mouseDiBack = false;
            mouseState = Mouse.GetState();
            if (new Rectangle(mouseState.X, mouseState.Y, 1, 1).Intersects(rect_arr[i])) {
                background_color_arr[i] = Color.Red;
            } else {
                background_color_arr[i] = Color.White;
            }
            if (new Rectangle(mouseState.X, mouseState.Y, 1, 1).Intersects(rect_arr[i])
                && (mouseState.LeftButton == ButtonState.Pressed)) {
                if (boxes[i] != "goat" && boxes[i] != "tiger") {
                    place = i;
                    if (turn == "goat") {
                        boxes[i] = "goat";
                        turn = "tiger";
                    } else {
                        boxes[i] = "tiger";
                        turn = "goat";
                    }
                } else {
                    errMsg = "This " + i + " block is not empty to place " + turn + ". Please select empty block!!";
                }
            }
        }
        base.Update(gameTime);
    }
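    A likely culprit (an assumption from the symptom): Update runs roughly 60 times per second, so one physical click stays "pressed" across many frames; the first frame places the piece, and every following frame re-tests the now-occupied box and sets the error. The standard XNA pattern is to act only on the press edge by remembering the previous MouseState (previousMouse below is a hypothetical new field):

        MouseState currentMouse = Mouse.GetState();
        bool clicked = currentMouse.LeftButton == ButtonState.Pressed
                    && previousMouse.LeftButton == ButtonState.Released;
        // ...inside the loop, test `clicked` instead of
        // (mouseState.LeftButton == ButtonState.Pressed)...
        previousMouse = currentMouse; // save at the end of Update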

    Read the article

  • ASP.NET Validator Controls Slowing Down Page

    - by Calvin Nguyen
    Hi all, I have an UpdatePanel that has user controls dynamically added to it. There can be a few dozen user controls at times. The page / UpdatePanel slows down big time on each postback as more user controls are added. After some digging, I was surprised to find the cause is the various CompareValidator, CustomValidator, RegularExpressionValidator and RequiredFieldValidator controls that exist on each user control. Does anyone have suggestions? It strikes me as very peculiar that the inclusion of these ASP.NET controls could have such a horrible effect on performance. Thanks, Calvin
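    A hedged experiment (not a known fix): turning off client-side evaluation on one control type at a time isolates whether the slowdown comes from the validation script being re-wired on every partial postback:

        <%-- EnableClientScript="false" keeps server-side validation but
             skips the client script emitted for each validator. --%>
        <asp:RequiredFieldValidator ID="rfvName" runat="server"
            ControlToValidate="txtName" EnableClientScript="false"
            ErrorMessage="Required" />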

    Read the article

  • Problem with MIB_IFTABLE & MIB_IFROW

    - by calvin
    Hello there, I'm using MIB_IFTABLE and MIB_IFROW to get the number of bytes transmitted and received. Everything looks fine, and the values are correct when using WiFi alone. But when I use a VPN connection over WiFi, these values are wrong for the WiFi adapter; interestingly, I get correct values for the VPN adapter. And if I view the status of any of these adapters (right-click and see status), all the values given by Windows are correct. Here are my questions: 1) Is there any other way of getting the number of bytes transferred over an adapter? 2) Is there a way to tell whether an adapter is a VPN adapter or not? (All VPN adapters are shown as "Local Area Connection".) 3) If you are connected to a VPN, is there a possibility of bytes being transferred through an adapter other than the VPN adapter? (Forget about split tunneling; consider the very simple case.) I use VS2005 on Win7. I'm new to these protocols, thanks for any help. -calvin
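    For questions 1 and 2, a sketch using the newer table API (assumes a Vista-or-later SDK; whether a given VPN driver reports IF_TYPE_TUNNEL is driver-dependent): GetIfTable2 exposes richer per-interface data than MIB_IFTABLE, including 64-bit octet counters and an interface Type that often distinguishes tunnel adapters from physical ones:

        #define _WIN32_WINNT 0x0600
        #include <winsock2.h>
        #include <iphlpapi.h>
        #include <stdio.h>
        #pragma comment(lib, "iphlpapi.lib")

        int main(void) {
            PMIB_IF_TABLE2 table = NULL;
            if (GetIfTable2(&table) == NO_ERROR) {
                for (ULONG i = 0; i < table->NumEntries; i++) {
                    MIB_IF_ROW2 *row = &table->Table[i];
                    /* IF_TYPE_TUNNEL (131) often marks VPN/tunnel adapters */
                    wprintf(L"%-40s type=%lu in=%I64u out=%I64u\n",
                            row->Description, row->Type,
                            row->InOctets, row->OutOctets);
                }
                FreeMibTable(table);
            }
            return 0;
        }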

    Read the article

  • Cannot ping static ip on eth1

    - by Calvin Froedge
    I am trying to ping the network interface I have set up for eth1. This is my config:
    auto eth1
    iface eth1 inet static
        address 192.168.1.2
        netmask 255.255.255.0
        gateway 192.168.1.1
        broadcast 192.168.1.255
    If I ping 192.168.1.2, I get:
    PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
    From 192.168.1.3 icmp_seq=3 Destination Host Unreachable
    The output of ifconfig tells me the IPv4 address is 192.168.1.3. I can ping this IP. Bcast and Mask are as expected (same as in the definition). I can ping 192.168.1.3 from my MacBook. I cannot ping 192.168.1.2 locally or from my MacBook. Any ideas why?
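    A short checklist sketch (assumes ifupdown owns eth1; the .3 address is a sign that something other than this stanza, such as NetworkManager or a DHCP lease, configured the interface):

        ip addr show eth1                   # which address is actually bound?
        sudo ifdown eth1 && sudo ifup eth1  # re-apply the static stanza
        ip route                            # does 192.168.1.0/24 leave via eth1?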

    Read the article

  • Performance Enhancement in Full-Text Search Query

    - by Calvin Sun
    Ever since its first release, we have continued to consolidate and develop the InnoDB Full-Text Search feature, and one recent improvement is worth blogging about. It is a joint effort with the MySQL Optimizer team that simplifies the Query Plans of some common queries and dramatically shortens the query time. I will describe the issue, our solution, and the end result, with performance numbers that demonstrate our continuing enhancement of the Full-Text Search capability.

    The Issue: As discussed in previous blogs, InnoDB implements the Full-Text index as inverted auxiliary tables. A query, once parsed, is reinterpreted into several queries against the related auxiliary tables, and the results are then merged and consolidated into the final result. So at the end of the query we have all matching records on hand, sorted by their ranking or by their Doc IDs. Unfortunately, MySQL's optimizer and query processing had initially been designed for the MyISAM Full-Text index, and sometimes did not fully utilize the complete result package from InnoDB. Here are a couple of examples:

    Case 1: Query result ordered by rank, with only the top N results:
    mysql> SELECT FTS_DOC_ID, MATCH (title, body) AGAINST ('database') AS SCORE FROM articles ORDER BY score DESC LIMIT 1;
    In this query, the user retrieves the single record with the highest ranking. It should get a quick answer once we have all the matching documents on hand, especially since they are already ranked. However, before this change, MySQL would retrieve the ranking of almost every row in the table, sort them all, and only then come up with the top-ranked result. This whole retrieve-and-sort is quite unnecessary given that InnoDB already has the answer. In a real-life case a user could have millions of rows, so under the old scheme the server would retrieve and sort millions of rankings even when FTS had already found that only 3 rows match. That million-row ranking retrieval is done in vain: the server should ask only for the 3 matched rows' rankings (every other row's ranking is 0), and if it wants the top ranking it can simply take the first record from our already-sorted result.

    Case 2: SELECT COUNT(*) on matching records:
    mysql> SELECT COUNT(*) FROM articles WHERE MATCH (title,body) AGAINST ('database' IN NATURAL LANGUAGE MODE);
    In this case, the InnoDB search finds the matching rows quickly and holds all of them. However, under the old scheme, MySQL requested every row in the table one by one, just to check whether its ranking is larger than 0, and produced a count afterwards. In fact there is no need for MySQL to fetch all rows; InnoDB already has all the matching records, and all that is needed is one InnoDB API call to retrieve the count. The difference can be huge. The following query output shows how big it can be:
    mysql> select count(*) from searchindex_inno where match(si_title, si_text) against ('people');
    +----------+
    | count(*) |
    +----------+
    |   666877 |
    +----------+
    1 row in set (16 min 17.37 sec)
    So the query took almost 16 minutes. Let's see how long InnoDB takes to come up with the result. In InnoDB you can obtain an extra diagnostic printout by turning on "innodb_ft_enable_diag_print"; it prints extra query info to the error log:
    keynr=2, 'people' NL search Total docs: 10954826 Total words: 0
    UNION: Searching: 'people' Processing time: 2 secs: row(s) 666877: error: 10
    ft_init() ft_init_ext() keynr=2, 'people' NL search Total docs: 10954826 Total words: 0
    UNION: Searching: 'people' Processing time: 3 secs: row(s) 666877: error: 10
    The output shows it took InnoDB only 3 seconds to get the result, while the whole query took 16 minutes to finish; a large amount of time was wasted on unneeded row fetching.

    The Solution: The solution is obvious: MySQL can skip some of its steps, optimize its plan, and obtain the needed information directly from InnoDB. The savings include:
    1) Avoid redundant sorting. Since InnoDB has already sorted the result by ranking, the MySQL query processing layer does not need to sort again to get the top matching results.
    2) Avoid row-by-row fetching to get the matching count. InnoDB provides all the matching records; every row not in that result list has a ranking of 0 and need not be retrieved, and InnoDB already holds a count of the total matching records. There is no need to recount.
    3) Covered index scan. InnoDB results always contain the matching records' Document IDs and rankings, so if only the Document ID and ranking are needed, there is no need to go to the user table to fetch the record itself.
    4) Narrow the search result early to reduce user-table access. If the user wants the top N matching records, we do not need to fetch all matching records from the user table; we can first select the top N matching Doc IDs and then fetch only the corresponding records.

    Performance results and comparison with MyISAM: The effect of this change is dramatic. I include six test results, performed by Alexander Rubin, that demonstrate how fast InnoDB queries have become compared with MyISAM Full-Text Search. These tests are based on the English Wikipedia data of 5.4 million rows, an approximately 16 GB table, and were performed on a machine with one dual-core CPU, an SSD drive, 8 GB of RAM, and innodb_buffer_pool set to 8 GB.

    Table 1: SELECT with LIMIT clause
    mysql> SELECT si_title, match(si_title, si_text) against('family') as rel FROM si WHERE match(si_title, si_text) against('family') ORDER BY rel desc LIMIT 10;
                         InnoDB     MyISAM            Times Faster
    Time for the query   1.63 sec   3 min 26.31 sec   127
    For this particular query (retrieve the top 10 records), InnoDB Full-Text Search is now approximately 127 times faster than MyISAM.

    Table 2: SELECT COUNT query
    mysql> select count(*) from si where match(si_title, si_text) against('family');
    +----------+
    | count(*) |
    +----------+
    |   293955 |
    +----------+
                         InnoDB     MyISAM             Times Faster
    Time for the query   1.35 sec   28 min 59.59 sec   1289
    In this particular case, with 293k matching results, InnoDB took only 1.35 seconds to get all of them, while MyISAM took almost half an hour: about 1289 times faster!

    Table 3: SELECT ID with ORDER BY and LIMIT clause for selected terms
    mysql> SELECT <ID>, match(si_title, si_text) against(<TERM>) as rel FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10;
    Term                                        InnoDB     MyISAM      Times Faster
    family                                      0.5 sec    5.05 sec    10.1
    family film                                 0.95 sec   25.39 sec   26.7
    Pizza restaurant orange county California   0.93 sec   32.03 sec   34.4
    President united states of America          2.5 sec    36.98 sec   14.8

    Table 4: SELECT title and text with ORDER BY and LIMIT clause for selected terms
    mysql> SELECT <ID>, si_title, si_text, ... as rel FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10;
    Term                                        InnoDB     MyISAM      Times Faster
    family                                      0.61 sec   41.65 sec   68.3
    family film                                 1.15 sec   47.17 sec   41.0
    Pizza restaurant orange county california   1.03 sec   48.2 sec    46.8
    President united states of america          2.49 sec   44.61 sec   17.9

    Table 5: SELECT ID with ORDER BY and LIMIT clause for selected terms
    mysql> SELECT <ID>, match(si_title, si_text) against(<TERM>) as rel FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10;
    Term                                        InnoDB     MyISAM      Times Faster
    family                                      0.5 sec    5.05 sec    10.1
    family film                                 0.95 sec   25.39 sec   26.7
    Pizza restaurant orange county california   0.93 sec   32.03 sec   34.4
    President united states of america          2.5 sec    36.98 sec   14.8

    Table 6: SELECT COUNT(*)
    mysql> SELECT count(*) FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) LIMIT 10;
    Term                                        InnoDB     MyISAM      Times Faster
    family                                      0.47 sec   82 sec      174.5
    family film                                 0.83 sec   131 sec     157.8
    Pizza restaurant orange county california   0.74 sec   106 sec     143.2
    President united states of america          1.96 sec   220 sec     112.2

    Tables 3 through 6 all show InnoDB consistently outperforming MyISAM in these queries by a large margin; InnoDB clearly has a great advantage over MyISAM in handling searches over large data sets.

    Summary: These results demonstrate the performance we can achieve by coupling the MySQL optimizer and InnoDB Full-Text Search more tightly. There are still many cases where InnoDB's result information is not fully taken advantage of, which means there is still great room for improvement, and we will continue to explore this area and deliver more dramatic gains for InnoDB full-text searches. Jimmy Yang, September 29, 2012

    Read the article

  • Setting up shared connection

    - by Calvin Froedge
    I have a network that is connected to the internet via a switch connected to a router. I have it set up like this so I can work on the new network without causing problems on the old one. Anyway, I'm trying to enable internet connection sharing. Internet comes to the server like this: Modem - Router - Switch - Ubuntu 11.10 (eth0). I want to share the connection through eth1 (eth1 - Managed Switch - Clients). I have a DHCP server running on eth1. Here is its config:
    ddns-update-style none;
    option domain-name "myserver.local";
    option domain-name-servers 192.168.1.2, 8.8.8.8;
    default-lease-time 600;
    max-lease-time 7200;
    authoritative;
    subnet 192.168.1.0 netmask 255.255.255.0 {
        interface eth1;
        range 192.168.1.3 192.168.1.254;
        option routers 192.168.1.1;
        option subnet-mask 255.255.255.0;
        option broadcast-address 192.168.1.255;
    }
    Here is /etc/network/interfaces:
    # The loopback network interface
    auto lo
    iface lo inet loopback
    # The primary network interface
    auto eth0
    iface eth0 inet dhcp
    # Used for internal network
    auto eth1
    iface eth1 inet static
        address 192.168.1.2
        netmask 255.255.255.0
        broadcast 192.168.1.255
        network 192.168.1.0
    Here is /etc/hosts:
    127.0.0.1 localhost
    127.0.1.1 myserver.isp.com server
    192.168.1.2 server.myserver.local server myserver.local
    In /etc/sysctl.conf I've set:
    net.ipv4.ip_forward=1
    Finally, in /etc/rc.local I've set:
    /sbin/iptables -P FORWARD ACCEPT
    /sbin/iptables --table nat -A POSTROUTING -o eth1 -j MASQUERADE
    When I ping 8.8.8.8 (Google's DNS) from a client that has a lease from my DHCP server (it has been assigned a local IP like 192.168.1.10), I get a timeout. How can I debug this further to figure out where my problem is?
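    One thing that stands out in the rc.local above (a hedged observation, not a verified fix for this setup): MASQUERADE is attached to eth1, the LAN side, but NAT has to happen on the interface the packets leave through, which is eth0 here. A sketch of the usual rules:

        /sbin/iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
        /sbin/iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
        /sbin/iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT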

    Read the article

  • Ubuntu 12.10 stucks in a Login Loop

    - by Calvin Wahlers
    My problem: as you can guess, my Ubuntu 12.10 is stuck in a login loop when trying to enter my desktop. That means the screen goes black and soon after that the login screen comes back. I'm an Ubuntu newbie, so if there's an answer please explain it in simple, understandable language :) I've already read that the problem might be caused by a graphics error, so I'll post my graphics card too. My graphics: ATI Radeon 7670M. Hope you can help me, thank you ;)
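    One frequently reported cause of 12.10 login loops (an assumption; the graphics driver is the other usual suspect): a root-owned ~/.Xauthority, left behind by a graphical app run with sudo, blocks the session from starting. A sketch of the check, from a text console (Ctrl+Alt+F1):

        ls -l ~/.Xauthority                  # owned by root instead of you?
        sudo chown $USER:$USER ~/.Xauthority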

    Read the article

  • New Enhancements for InnoDB Memcached

    - by Calvin Sun
    In MySQL 5.6 we continued our development of InnoDB Memcached and completed a few widely desired features that make InnoDB Memcached competitive in more scenarios. Notably, they are: 1) support for multiple table mappings; 2) a background thread to auto-commit long-running transactions; 3) binlog performance enhancements. Let's go over each of these features one by one; in the last section we will go over a couple of internally performed performance tests.

    Support for multiple table mappings: In our earlier release, all InnoDB Memcached operations were mapped to a single InnoDB table. In real life, users want to use the InnoDB Memcached feature on different tables, so being able to access different tables at run time, with different mappings for different connections, is very desirable, and in this GA release both are possible. We will discuss the key concepts and the key steps for using this feature.

    1) The "mapping name" in get and set commands: To let InnoDB Memcached map to a new table, the user (DBA) is still required to "pre-register" the table(s) in the InnoDB Memcached "containers" table (there are security considerations behind this requirement). If you would like to know more about the "containers" table, please refer to my earlier blogs on blogs.innodb.com. Once a table is registered, InnoDB Memcached can look it up whenever it is referred to. Each registered table has a unique "registration name" (or mapping_name) corresponding to the "name" field in the "containers" table. To access these tables, the user includes the "registration name" in get or set commands, in the form "get @@new_mapping_name.key"; the prefix "@@" signals a mapped-table change. The key and the "mapping name" are separated by a configurable delimiter, "." by default. So the syntax is:
    get [@@mapping_name.]key_name
    set [@@mapping_name.]key_name
    or
    get @@mapping_name
    set @@mapping_name
    Here is an example. Let's set up three tables in the "containers" table. The first maps to the InnoDB table "test/demo_test" with mapping name "setup_1":
    INSERT INTO containers VALUES ("setup_1", "test", "demo_test", "c1", "c2", "c3", "c4", "c5", "PRIMARY");
    Similarly, we set up mappings for table "test/new_demo" under the name "setup_2" and for table "my_database/my_demo" under the name "setup_3":
    INSERT INTO containers VALUES ("setup_2", "test", "new_demo", "c1", "c2", "c3", "c4", "c5", "secondary_index_x");
    INSERT INTO containers VALUES ("setup_3", "my_database", "my_demo", "c1", "c2", "c3", "c4", "c5", "idx");
    To switch to table "my_database/my_demo" and get the value corresponding to "key_a", the user issues:
    get @@setup_3.key_a   (this also outputs the value corresponding to key "key_a")
    or simply:
    get @@setup_3
    Once this is done, the connection is switched to the "my_database/my_demo" table until another table-mapping switch is requested, so it can keep issuing regular commands like:
    get key_b
    set key_c 0 0 7
    These DMLs are all directed to the "my_database/my_demo" table. This also implies that different connections can have different bindings (to different tables).

    2) Delimiter: For the delimiter "." that separates the "mapping name" from the key value, we also added a configuration option in the "config_options" system table, named "table_map_delimiter":
    INSERT INTO config_options VALUES("table_map_delimiter", ".");
    So if the user wants a different delimiter, they can change it in the config_options table.

    3) Default mapping: With multiple table mappings there must always be a "default" mapping. We decided that if a mapping named "default" exists, it is chosen as the default mapping; otherwise, the first row of the containers table is chosen as the default. Please note, user tables can be repeated in the "containers" table (for example, if the user wants to access different columns of the same table in different settings), as long as they use different mapping/configuration names in the first column, which is enforced by a unique index.

    4) The bind command: In addition, we extended the protocol with a bind command; its usage is fairly straightforward. To switch to the "setup_3" mapping above, you simply issue:
    bind setup_3
    This switches the connection's InnoDB table to "my_database/my_demo".
    In summary, with this feature you can now access different tables from different sessions, and even within a single connection you can query different tables.

    Background thread to auto-commit long-running transactions: This feature relates to the "batch" concept discussed in earlier blogs. The "batch" feature lets us batch read and write operations and commit them only after a certain number of calls, controlled by the configuration parameters "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size". This can significantly boost performance, but it also comes with some disadvantages: for example, you cannot view "uncommitted" operations from the SQL end unless you set the transaction isolation level to READ UNCOMMITTED, and batching holds certain row locks for extended periods of time, which can reduce concurrency. To deal with this, we introduced a background thread that auto-commits transactions that have been idle for a certain amount of time (5 seconds by default). The background thread wakes up every second, loops through every connection opened by Memcached, and checks for idle transactions. If a transaction has been idle longer than the limit and is not being used, it is committed. The limit is configurable through "innodb_api_bk_commit_interval"; its default value is 5 seconds, the minimum is 1 second, and the maximum is 1073741824 seconds. With the help of this background thread, you no longer need to worry about long-running uncommitted transactions when setting daemon_memcached_w_batch_size and daemon_memcached_r_batch_size to large numbers. It also reduces the number of locks held by long-running transactions, further increasing concurrency.

    Enhancement in binlog performance: As you may know, binlog operations are not performed by the InnoDB storage engine; they are handled in the MySQL layer. To support binlog operations through InnoDB Memcached, we have to artificially create some MySQL constructs in order to access the binlog handler APIs. In the previous lab release, for simplicity, we opened and destroyed these MySQL constructs (such as THD) for each operation. This required us to always set the "batch" size to 1 when the binlog was on, no matter what "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size" were configured to. This put a big restriction on our ability to scale, and the overhead of creating and destroying these constructs bogged performance down. With this release, we made the necessary changes to keep the MySQL constructs for as long as they are valid for a particular connection, so there are no repeated, redundant open and close (table) calls. Now, even with the binlog option enabled (innodb_api_enable_binlog), we can still batch transactions with daemon_memcached_w_batch_size and daemon_memcached_r_batch_size, and thus scale write/read performance. There is still overhead that keeps InnoDB Memcached from performing as fast as with the binlog turned off, but it is much better than the previous release, and we continue to optimize this area to improve performance as much as possible.

    Performance study: Amerandra of our System QA team conducted performance studies of queries through the InnoDB Memcached connection versus the plain SQL end, with interesting results. The tests were conducted on a "Linux 2.6.32-300.7.1.el6uek.x86_64 ix86 (64)" machine with 16 GB of memory, Intel Xeon 2.0 GHz CPUs (x86_64, 2 CPUs with 4 cores each), and 2 RAID disks (1027 GB, 733.9 GB). The results are described in the following tables:

    Table 1: Performance comparison on set operations
    Connections   5.6.7-RC-Memcached-plugin (QPS, memcached-threads=8***)   5.6.7-RC* Set (QPS)**   X faster
    8             30,000                                                    5,600                   5.36
    32            59,000                                                    13,000                  4.54
    128           68,000                                                    8,000                   8.50
    512           63,000                                                    6,800                   9.23
    * mysql-5.6.7-rc-linux2.6-x86_64
    ** The "set" operation, as implemented in InnoDB Memcached, involves a couple of DMLs: it first queries the table to see whether the "key" exists; if it does not, the new key/value pair is inserted; if it does, the "value" field of the matching row (by key) is updated. So in the comparison above, the SQL side executes an equivalent precompiled stored procedure.
    *** added "-daemon_memcached_option=-t8" (the default is 4 threads)
    So with this "set" workload, InnoDB Memcached runs 4.5 to 9 times faster than the MySQL server.

    Table 2: Performance comparison on get operations
    Connections   5.6.7-RC-Memcached-plugin (QPS, memcached-threads=8)   5.6.7-RC Get (QPS)   X faster
    8             42,000                                                 27,000               1.56
    32            101,000                                                55,000               1.83
    128           117,000                                                52,000               2.25
    512           109,000                                                52,000               2.10
    With the "get" query (i.e. the select query), memcached performs 1.5 to 2 times faster than normal SQL.

    Summary: In this release we added several much-desired features to InnoDB Memcached, allowing users to operate on different tables through the Memcached interface. We now also provide a background thread to commit long-running idle transactions, so users can configure large batch sizes for writes/reads without worrying about rows being locked for too long or uncommitted data being invisible. We also greatly enhanced performance when the binlog is enabled. We will continue working on both performance and functionality to make InnoDB Memcached a good showcase for our InnoDB APIs. Jimmy Yang, September 29, 2012

    Read the article

  • Is it better to load up a class with methods or extend member functionality in a local subclass?

    - by Calvin Fisher
    Which is better? Class #1:
    public class SearchClass
    {
        public SearchClass(string ProgramName)
        {
            /* Searches LocalFile objects, handles exceptions,
               and puts results into m_Results. */
        }

        DateTime TimeExecuted;
        bool OperationSuccessful;

        protected List<LocalFile> m_Results;
        public ReadOnlyCollection<LocalFile> Results
        {
            get { return new ReadOnlyCollection<LocalFile>(m_Results); }
        }

        #region Results Filters
        public DateTime OldestFileModified
        {
            get { /* Does what it says. */ }
        }

        public ReadOnlyCollection<LocalFile> ResultsWithoutProcessFiles()
        {
            return new ReadOnlyCollection<LocalFile>
                ((from x in m_Results
                  where x.FileTypeID != FileTypeIDs.ProcessFile
                  select x).ToList());
        }
        #endregion
    }
    Or class #2:
    public class SearchClass
    {
        public SearchClass(string ProgramName)
        {
            /* Searches LocalFile objects, handles exceptions,
               and puts results into m_Results. */
        }

        DateTime TimeExecuted;
        bool OperationSuccessful;

        protected List<LocalFile> m_Results;
        public ReadOnlyCollection<LocalFile> Results
        {
            get { return new ReadOnlyCollection<LocalFile>(m_Results); }
        }

        public class SearchResults : ReadOnlyCollection<LocalFile>
        {
            public SearchResults(IList<LocalFile> iList) : base(iList) { }

            #region Results Filters
            public DateTime OldestFileModified
            {
                get { /* Does what it says. */ }
            }

            public ReadOnlyCollection<LocalFile> ResultsWithoutProcessFiles()
            {
                return new ReadOnlyCollection<LocalFile>
                    ((from x in this
                      where x.FileTypeID != FileTypeIDs.ProcessFile
                      select x).ToList());
            }
            #endregion
        }
    }
    ...with the implication that OperationSuccessful is accompanied by a number of more interesting properties on how the operation went, and that OldestFileModified and ResultsWithoutProcessFiles() have several more siblings in the Results Filters section.

    Read the article

  • Object Oriented programming on 8-bit MCU Case Study

    - by Calvin Grier
    I see that there are a lot of questions related to OO programming here. I'm actually trying to find a specific resource related to embedded OO approaches for an 8-bit MCU. Several years back (maybe 6) I was looking for material related to object-oriented programming for resource-constrained 8051 microprocessors. I found an article/website with a case history of a design group that used a part with very little RAM and implemented many object-based constructs during their C design and development. I believe it was an 8051. The project was a success and managed to stay inside the very small ROM/RAM they had available. I'm attempting to find it again, but Google can't locate it. The article was well written and recommended a "mixed" approach, using C methods for inheritance and encapsulation, if I recall correctly. Can anyone help me locate this article?
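    For readers hunting for the same material, a minimal sketch of the "mixed" C style such articles typically describe (this is an illustration, a guess at the content, not an excerpt from the article being sought): encapsulation through structs, with function pointers only where polymorphism is genuinely needed, since indirect calls are expensive on an 8051-class part:

        typedef struct Sensor Sensor;
        struct Sensor {
            unsigned char (*read)(Sensor *self); /* "virtual" method slot */
            unsigned char last_value;            /* "private" state       */
        };

        static unsigned char adc_read(Sensor *self) {
            /* hardware access would go here */
            return self->last_value;
        }

        /* usage: Sensor s = { adc_read, 0 };  s.read(&s); */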

    Read the article

  • Insurers Pushed to Transform Their Business

    - by Calvin Glenn
    Everyone in the P&C industry has heard it “We can’t do it.” “Nobody wants to do it.” “We can’t afford to do it.”  Unfortunately, what they’re referencing are the reasons many insurers are still trying to maintain their business processing on legacy policy administration systems, attempting to bide time until there is no other recourse but to give in, bite the bullet, and take on the monumental task of replacing an entire policy administration system (PAS). Just the thought of that project sends IT, Business Users and Management reeling. However, is that fear real?  It is a bit daunting when one realizes that a complete policy administration system replacement will touch most every function an insurer manages, from quoting and rating, to underwriting, distribution, and even customer service. With that, everyone has heard at least one horror story around a transformation initiative that has far exceeded budget and the promised implementation / go-live timeline.    But, does it have to be that hard?  Surely, in the age where a person can voice-activate their DVR to record a TV program from a cell phone, there has to be someone somewhere who’s figured out how to simplify this process. To be able to help insurers, of all sizes, transform and grow their business while also delivering on their overall objectives of providing speed to market, straight-through-processing for applications, quoting, underwriting, and simplified product development. Maybe we’re looking too hard and the answer is simple and straight-forward. Why replace the entire machine when all it really needs is a new part…a single enterprise rating system? This core, modular piece of the policy administration system is the foundation of product development and rate management that enables insurers to provide the right product at the right price to the right customer through the best channels at any given moment in time. The real benefit of a single enterprise rating system is the ability to deliver enhanced business capabilities, such as improved product management, streamlined underwriting, and speed to market. With these benefits, carriers have accomplished a portion of their overall transformation goal. Furthermore, lessons learned from the rating project can be applied to the bigger, down-the-road PAS project to support the successful completion of the overall transformation endeavor. At the recent Oracle OpenWorld Conference in San Francisco, information was shared with attendees about a recent “go-live” project from an Oracle Insurance Tier 1 insurer who did what is proposed above…replaced just the rating portion of their legacy policy administration system with Oracle Insurance Insbridge Rating and Underwriting.  This change provided the insurer greater flexibility to set rates that better reflect risk while enabling the company to support its market segment strategy. Using the Oracle Insurance Insbridge enterprise rating solution, the insurer was able to reduce processing time for agents and underwriters, gained the ability to support proprietary rating models and improved pricing accuracy.      There is mounting pressure on P&C insurers to produce growth and show net profitability in the midst of modest overall industry growth, large weather-related losses and intensifying competition for market share.  Insurers are also being asked to improve customer service, offer a differentiated value proposition and simplify insurance processes.  
While the demands are many, there is an easy answer…invest in and update the most mission-critical application in your arsenal: the single enterprise rating system. Download the podcast to listen to “Stand-Alone Rating Engine - Leading Force Behind Core Transformation Projects in the P&C Market,” a podcast originally recorded in October 2013. Related Resources: White Paper: Stand-Alone Rating Engine: Leading Force Behind Core Transformation Projects in the P&C Market Webcast On Demand: Stand-Alone Rating Engine and Core Transformation for P&C Insurers Don’t forget to keep up with us year-round: Facebook: www.facebook.com/oracleinsurance Twitter: www.twitter.com/oracleinsurance YouTube: www.youtube.com/oracleinsurance

    Read the article

  • SerializationException Occurring Only in Release Mode

    - by Calvin Nguyen
    Hi, I am working on an ASP.NET web app using Visual Studio 2008 and a third-party library. Things are fine in my development environment. Things are also good if the web app is deployed in Debug configuration. However, when it is deployed in Release mode, SerializationExceptions appear intermittently, breaking other functionality. In the Windows event log, the following error can be seen:
    "An unhandled exception occurred and the process was terminated.
    Application ID: DefaultDomain
    Process ID: 3972
    Exception: System.Runtime.Serialization.SerializationException
    Message: Unable to find assembly 'MyThirdPartyLibrary, Version=1.234.5.67, Culture=neutral, PublicKeyToken=3d67ed1f87d44c89'.
    StackTrace:
    at System.Runtime.Serialization.Formatters.Binary.BinaryAssemblyInfo.GetAssembly()
    at System.Runtime.Serialization.Formatters.Binary.ObjectReader.GetType(BinaryAssemblyInfo assemblyInfo, String name)
    at System.Runtime.Serialization.Formatters.Binary.ObjectMap..ctor(String objectName, String[] memberNames, BinaryTypeEnum[] binaryTypeEnumA, Object[] typeInformationA, Int32[] memberAssemIds, ObjectReader objectReader, Int32 objectId, BinaryAssemblyInfo assemblyInfo, SizedArray assemIdToAssemblyTable)
    at System.Runtime.Serialization.Formatters.Binary.ObjectMap.Create(String name, String[] memberNames, BinaryTypeEnum[] binaryTypeEnumA, Object[] typeInformationA, Int32[] memberAssemIds, ObjectReader objectReader, Int32 objectId, BinaryAssemblyInfo assemblyInfo, SizedArray assemIdToAssemblyTable)
    at System.Runtime.Serialization.Formatters.Binary.__BinaryParser.ReadObjectWithMapTyped(BinaryObjectWithMapTyped record)
    at System.Runtime.Serialization.Formatters.Binary.__BinaryParser.ReadObjectWithMapTyped(BinaryHeaderEnum binaryHeaderEnum)
    at System.Runtime.Serialization.Formatters.Binary.__BinaryParser.Run()
    at System.Runtime.Serialization.Formatters.Binary.ObjectReader.Deserialize(HeaderHandler handler, __BinaryParser serParser, Boolean fCheck, Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage)
    at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Deserialize(Stream serializationStream, HeaderHandler handler, Boolean fCheck, Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage)
    at System.Runtime.Remoting.Channels.CrossAppDomainSerializer.DeserializeObject(MemoryStream stm)
    at System.AppDomain.Deserialize(Byte[] blob)
    at System.AppDomain.UnmarshalObject(Byte[] blob)
    For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp."
    Using FUSLOGVW.exe (i.e., the Assembly Binding Log Viewer), I can see the problem is that IIS attempts to find MyThirdPartyLibrary in the directory C:\windows\system32\inetsrv. It refuses to look in the bin folder of the web app, where the DLL is actually located. Does anyone know what the problem is? Thanks, Calvin
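    One commonly cited workaround for this exact pattern (a hedged suggestion, not a verified fix): the exception fires in the DefaultDomain, which never probes the web app's bin folder, so installing the third-party assembly into the GAC makes it resolvable from any AppDomain. From a Visual Studio command prompt:

        gacutil /i MyThirdPartyLibrary.dll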

    Read the article

  • C compiler selection in cabal package

    - by ony
    Today I tried a different C compiler (Clang) for the C code I use in my Haskell library and found that, compared with my system compiler (GCC 4.4.3), I gain a speed increase from 426.404 Gbit/s to 0.823 Tbit/s. So I decided to add some flags to control how the C source file is compiled (i.e. something like use-clang, use-intel, etc.). A snippet of the cabal package description file:
    C-Sources: c_lib/tiger.c
    Include-Dirs: c_lib
    Install-Includes: tiger.h
    if flag(debug)
      GHC-Options: -debug -Wall -fno-warn-orphans
      CPP-Options: -DDEBUG
      CC-Options: -DDEBUG -g
    else
      GHC-Options: -Wall -fno-warn-orphans
    The question is: which options in the description file need to be modified to change the C compiler used to compile "c_lib/tiger.c"? I only found CC-Options.
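    A sketch of one way to wire this up (the flag name use-clang is hypothetical; it relies on GHC's -pgmc flag, which swaps the C compiler GHC invokes, and on Cabal compiling C-Sources through GHC):

        Flag use-clang
          Description: Build the bundled C sources with Clang
          Default:     False

        Library
          C-Sources: c_lib/tiger.c
          if flag(use-clang)
            GHC-Options: -pgmc clang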

    Read the article

  • Problems using PC as a media server with PS3

    - by Tiger
    I recently got a PS3 and decided to take advantage of the fact that it can be used to stream movies by making my PC a media server. I've done this in the past, with the same router I have now, before I sold my old PS3, but not on this PC. I've tried using both TVersity and PS3 Media Server, but I don't think the problem lies within the configuration of either of those programs, because I am unable to ping the PS3. This problem only occurs when I am using a wired connection on my PC while the PS3 is on WLAN. If I switch to WLAN on my PC, I can successfully ping the PS3 and connect to the media server. Thanks

    Read the article

  • SQL*Plus - suppressing the startup banner in scripts (SQL*Plus Tips-2)

    - by Yuichi.Hayashi
    When you run SQL*Plus from a script, the startup banner and version messages clutter the output. SQL*Plus's -s (silent) option suppresses the banner and prompts:
    <without the -s option>
    $ sqlplus scott/tiger
    SQL*Plus: Release 11.2.0.1.0 Production on ? 12? 22 17:14:14 2010
    Copyright (c) 1982, 2009, Oracle. All rights reserved.
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP, Data Mining and Real Application Testing options
    SQL>
    <with the -s option>
    $ sqlplus -s scott/tiger
    select sysdate from dual;
    SYSDATE
    --------
    10-12-22
    exit
    $
    (Written by Hiroyuki Nakaie)

    Read the article

  • SQL*Plus - running SQL from a shell script with a here-document (SQL*Plus Tips-3)

    - by Yuichi.Hayashi
    You can run a SQL file like this:
    $ sqlplus @batch1.sql
    but then the SQL lives in a separate file, which is awkward to manage when a shell script drives the whole job. With a shell here-document you can embed the SQL directly in the script and feed it to SQL*Plus:
    #!/bin/sh
    sqlplus -s /nolog <<EOF
    conn scott/tiger
    select sysdate from dual;
    exit
    EOF
    The "EOF" delimiter is arbitrary; any token, such as "EndOfSQL", works as long as the opening and closing markers match. Because the shell expands variables inside an unquoted here-document, the embedded SQL can be parameterized, for example to substitute a table name into a SELECT:
    #!/bin/sh
    table_name=dual
    sqlplus -s /nolog <<EndOfSQL
    conn scott/tiger
    select sysdate from $table_name;
    exit
    EndOfSQL
    (Written by Hiroyuki Nakaie)

    Read the article

  • simple network between xp & 7 with cross cable problem...

    - by LostLord
    hi my dear friends: I have a simple home network between an XP machine and a Windows 7 machine, connected with a crossover cable (2 PCs).
    =====================================================================
    The one with 7 is the "mother" and has 2 LAN devices (onboard + PCI).
    A. The onboard adapter, when you go to TCP/IPv4 properties (used for ADSL internet): obtain an IP automatically; preferred DNS server: 81.91.129.67; alternate DNS server: 4.2.2.4; shared; no permission to change, so everything is OK for internet on Windows 7.
    B. The other LAN (PCI) card, which is connected to the XP PC: 192.168.2.11, 255.255.255.0, gateway 0.0.0.0, empty DNS. Computer name: cougar; workgroup: nethome. HomeGroup is disabled (I think that is for 2 PCs with 7, not XP). Everything is off in the network options except file & printer sharing in the public area.
    =====================================================================
    The PC with XP: 192.168.2.12, 255.255.255.0, 192.168.2.11 (meaning the gateway), DNS 4.2.2.4 and 8.8.8.8. Computer name: tiger; workgroup: nethome.
    =====================================================================
    At last my little net is OK... meaning both have internet and both can see each other by IP (\\192.168.2.11 or \\192.168.2.12). My problem is that when I type \\cougar on the XP machine it shows an error about the network path, but on the 7 PC \\tiger works perfectly. What is the problem on the system with XP? A few days ago this network was OK (searching by computer name) when both machines ran XP, so there is no problem with my cable or devices. Another problem is that I cannot find tiger in my network list on the 7 PC - why? Is something wrong with my network? Thanks for future advice, best regards
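    A sketch of the usual checks from the XP box (a hedged guess: with IP connectivity already proven, the broken piece is most likely NetBIOS name resolution):

        :: IP connectivity is already proven:
        ping 192.168.2.11
        :: Does NetBIOS resolve the name at all?
        nbtstat -a COUGAR
        :: Query the remote machine's name table by IP instead:
        nbtstat -A 192.168.2.11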

    Read the article

  • SQL to search duplicates

    - by Ram
    I have a table for animals like: Lion, Tiger, Elephant, Jaguar, Cheetah, Puma, Rhino. I want to insert new animals into this table, and I am reading the animal names from a CSV file. Suppose I got the following names in the file: Lion, Tiger, Jaguar. As these animals are already in the "Animals" table, what should be a single SQL query that determines whether the animals already exist in the table?
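    A sketch of such a query (assuming a name column in the Animals table): a single IN list returns the names that already exist, and anything not returned is safe to insert:

        SELECT name
        FROM   Animals
        WHERE  name IN ('Lion', 'Tiger', 'Jaguar');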

    Read the article
