Search Results

Search found 1341 results on 54 pages for 'factor mystic'.


  • Difference between two PHP times as years, months and days for PHP version 5.2

    - by Dominor Novus
    Foreword: I've scanned through the existing questions/answers on this matter. This is not a duplicate question; I cannot find a working solution in the accepted answers. The main questions/answers I've reviewed can be found here: How to calculate the difference between two dates using PHP?

    What I need: a calculation of the difference between two dates, expressed as years, months and days, that works with PHP version 5.2.

        <?php
        $current_date = date('d-M-Y');
        $future_date = '2012-11-01';
        ?>

    What I've tried: Most answers I find online don't seem to be exact, in that they don't factor in leap years. This highly rated answer won't work because DateTime::diff() is PHP 5.3+. The accepted answer (i.e. the second block of code, aimed at PHP 5.2) results in the following being returned:

        Array ( [y] => 25 [m] => 11 [d] => 7 [h] => 3 [i] => 15 [s] => 19 [invert] => 0 [days] => 9473 )
        Array ( [y] => 25 [m] => 11 [d] => 7 [h] => 3 [i] => 15 [s] => 19 [invert] => 1 [days] => 9473 )

    I can't tell if I've applied the code incorrectly or if it's simply a case of me not knowing how to manipulate the array.
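
    A minimal sketch of one PHP 5.2-compatible approach (an illustration, not the accepted answer's code): step forward in whole years and then whole months with strtotime(), and take whatever is left over as days. Leap years are handled implicitly because strtotime() works on real calendar dates.

        <?php
        // Sketch only: difference between two dates as years/months/days
        // without DateTime::diff(), so it runs on PHP 5.2.
        function date_diff_ymd($from, $to) {
            $start = strtotime($from);
            $end   = strtotime($to);
            if ($start > $end) {
                list($start, $end) = array($end, $start);
            }
            $years = 0;
            while (strtotime('+1 year', $start) <= $end) {
                $start = strtotime('+1 year', $start);
                $years++;
            }
            $months = 0;
            while (strtotime('+1 month', $start) <= $end) {
                $start = strtotime('+1 month', $start);
                $months++;
            }
            // The remaining gap is under a month; rounding absorbs DST shifts.
            $days = (int) round(($end - $start) / 86400);
            return array('y' => $years, 'm' => $months, 'd' => $days);
        }

        print_r(date_diff_ymd(date('Y-m-d'), '2012-11-01'));
        ?>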

    Read the article

  • Sync video play over network

    - by Nemesis
    Hi, I have made a media player that plays basically anything that's scheduled to it via a text file. The player can also play the exact same clip on multiple machines (PCs). The problem is the syncing. The same video starts playing on each of the machines, but they are out by about 400 ms, which looks crap, and if there's sound it's even worse.

    What I do at the moment: One machine is set up as the master and all other machines are set up as slaves. The master decides what item will be played. It waits for a message from each of the slaves; once all slaves are connected (or after the timeout), it broadcasts the item id of the file that needs to be played. All machines then start playing that file.

    What I also tried: I thought that the file loading time might be the major driving factor in the sync mismatch, so I changed the code to do the following. The master still decides what file to play. It waits for the connect message from each slave (or the timeout) and transmits the item id of the file to play. All machines start playing that file but pause it immediately. The master then waits again for a ready message from each of the slaves. As soon as all slaves have responded, the master sends a play message to all slaves and all machines resume the file. This unfortunately did not improve the problem.

    I am now pretty sure the sync mismatch is due to network delay. How can I compensate for this? Or maybe determine the delay to each slave? All network comms are done with Winsock. Any thoughts or ideas are much appreciated.
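
    One common way to handle this (a sketch, not tied to the existing Winsock code): estimate each slave's clock offset from the master with a simple request/response exchange, then have the master broadcast "start at master time T" a little in the future and let every slave convert T to its local clock.

        // Sketch: estimate a slave's clock offset from one round trip.
        // t0/t1 are slave-local timestamps taken when the request was sent
        // and the reply received; masterTime is the master's timestamp
        // carried in the reply. All values are in milliseconds.
        #include <cstdint>

        struct OffsetEstimate {
            std::int64_t offsetMs;     // add to the slave clock to get master time
            std::int64_t roundTripMs;  // smaller round trip = better estimate
        };

        OffsetEstimate estimateOffset(std::int64_t t0, std::int64_t t1,
                                      std::int64_t masterTime) {
            OffsetEstimate e;
            e.roundTripMs = t1 - t0;
            // Assume the reply spent half the round trip in flight.
            e.offsetMs = masterTime + e.roundTripMs / 2 - t1;
            return e;
        }

    Repeating the exchange a few times and keeping the sample with the smallest round trip usually gives an estimate well under the 400 ms error seen here; each slave then starts playback when its local clock plus offsetMs reaches the broadcast start time.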

    Read the article

  • What is the fastest (to access) struct-like object in Python?

    - by DNS
    I'm optimizing some code whose main bottleneck is running through and accessing a very large list of struct-like objects. Currently I'm using namedtuples, for readability. But some quick benchmarking using 'timeit' shows that this is really the wrong way to go where performance is a factor:

        Named tuple with a, b, c:
        >>> timeit("z = a.c", "from __main__ import a")
        0.38655471766332994

        Class using __slots__, with a, b, c:
        >>> timeit("z = b.c", "from __main__ import b")
        0.14527461047146062

        Dictionary with keys a, b, c:
        >>> timeit("z = c['c']", "from __main__ import c")
        0.11588272541098377

        Tuple with three values, using a constant key:
        >>> timeit("z = d[2]", "from __main__ import d")
        0.11106188992948773

        List with three values, using a constant key:
        >>> timeit("z = e[2]", "from __main__ import e")
        0.086038238242508669

        Tuple with three values, using a local key:
        >>> timeit("z = d[key]", "from __main__ import d, key")
        0.11187358437882722

        List with three values, using a local key:
        >>> timeit("z = e[key]", "from __main__ import e, key")
        0.088604143037173344

    First of all, is there anything about these little timeit tests that would render them invalid? I ran each several times, to make sure no random system event had thrown them off, and the results were almost identical.

    It would appear that dictionaries offer the best balance between performance and readability, with classes coming in second. This is unfortunate, since, for my purposes, I also need the object to be sequence-like; hence my choice of namedtuple. Lists are substantially faster, but constant keys are unmaintainable; I'd have to create a bunch of index-constants, i.e. KEY_1 = 1, KEY_2 = 2, etc. which is also not ideal. Am I stuck with these choices, or is there an alternative that I've missed?
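
    For anyone reproducing these numbers, a sketch of the kind of setup the timeit calls above assume (the objects a through e and key are reconstructions, not the original code):

        # Sketch: reproducible version of the benchmark above.
        from collections import namedtuple
        from timeit import timeit

        Point = namedtuple('Point', 'a b c')

        class SlotPoint(object):
            __slots__ = ('a', 'b', 'c')
            def __init__(self, a, b, c):
                self.a, self.b, self.c = a, b, c

        a = Point(1, 2, 3)              # namedtuple
        b = SlotPoint(1, 2, 3)          # __slots__ class
        c = {'a': 1, 'b': 2, 'c': 3}    # dict
        d = (1, 2, 3)                   # tuple
        e = [1, 2, 3]                   # list
        key = 2

        for label, stmt, setup in [
            ('namedtuple attribute', 'z = a.c',    'from __main__ import a'),
            ('__slots__ attribute',  'z = b.c',    'from __main__ import b'),
            ('dict key',             "z = c['c']", 'from __main__ import c'),
            ('tuple constant index', 'z = d[2]',   'from __main__ import d'),
            ('list constant index',  'z = e[2]',   'from __main__ import e'),
            ('tuple local index',    'z = d[key]', 'from __main__ import d, key'),
            ('list local index',     'z = e[key]', 'from __main__ import e, key'),
        ]:
            print(label, timeit(stmt, setup))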

    Read the article

  • Creating a join based on data from other tables...

    - by Workshop Alex
    I'm dealing with a database structure that can be defined as "illogical". It has about 100 different schemas, each with a different table structure. The only common factor is a "Version" table in each schema containing about 4 fields. (Thus, there are about 100 Version tables in the database.) There's also another table (a view, actually) containing a list of all the schemas in the database that have a Version table.

    I need a stored procedure that walks through all the schemas and selects all data from each Version table, adding the schema name as a fifth field to the result. Basically, this stored procedure is to return a list of all version records per schema.

    My idea: first walk through the schema list to build one new SQL statement that combines all the schema.Version tables into a single query, then return the result of that query. How do I do this? Or does anyone have a better suggestion? (No, redesigning the structure is NOT an option.)
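
    One way to build that dynamic statement (a sketch assuming SQL Server; the name and column of the schema-list view, here dbo.SchemaList(SchemaName), are assumptions, and every Version table is assumed to have the same four columns):

        -- Sketch: glue one SELECT per schema together with UNION ALL,
        -- then execute the result.
        CREATE PROCEDURE dbo.GetAllVersions
        AS
        BEGIN
            DECLARE @sql NVARCHAR(MAX);
            SET @sql = N'';

            SELECT @sql = @sql
                + CASE WHEN @sql = N'' THEN N'' ELSE N' UNION ALL ' END
                + N'SELECT ' + QUOTENAME(SchemaName, '''') + N' AS SchemaName, v.*'
                + N' FROM ' + QUOTENAME(SchemaName) + N'.Version AS v'
            FROM dbo.SchemaList;

            EXEC sp_executesql @sql;
        END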

    Read the article

  • plot an item map (based on difficulties)

    - by Tyler Rinker
    I have a data set of item difficulties that correspond to items on a questionnaire and looks like this:

           item                                            difficulty
        1  ITEM_6: I DESTROY THINGS BELONGING TO OTHERS    2.31179818
        2  ITEM_11: I PHYSICALLY ATTACK PEOPLE             1.95215238
        3  ITEM_5: I DESTROY MY OWN THINGS                 1.93479536
        4  ITEM_10: I GET IN MANY FIGHTS                   1.62610855
        5  ITEM_19: I THREATEN TO HURT PEOPLE              1.62188759
        6  ITEM_12: I SCREAM A LOT                         1.45137544
        7  ITEM_8: I DISOBEY AT SCHOOL                     0.94255210
        8  ITEM_3: I AM MEAN TO OTHERS                     0.89941812
        9  ITEM_20: I AM LOUDER THAN OTHER KIDS            0.72752197
        10 ITEM_17: I TEASE OTHERS A LOT                   0.61792597
        11 ITEM_9: I AM JEALOUS OF OTHERS                  0.61288399
        12 ITEM_4: I TRY TO GET A LOT OF ATTENTION         0.39947791
        13 ITEM_18: I HAVE A HOT TEMPER                    0.32209970
        14 ITEM_13: I SHOW OFF OR CLOWN                    0.31707701
        15 ITEM_7: I DISOBEY MY PARENTS                    0.20902108
        16 ITEM_2: I BRAG                                  0.19923607
        17 ITEM_15: MY MOODS OR FEELINGS CHANGE SUDDENLY   0.06023317
        18 ITEM_14: I AM STUBBORN                         -0.31155481
        19 ITEM_16: I TALK TOO MUCH                       -0.67777282
        20 ITEM_1: I ARGUE A LOT                          -1.15013758

    I want to make an item map of these items that looks similar (not exactly) to this (I created this in Word, but it lacks true scaling as I just eyeballed the scale). It's not really a traditional statistical graphic, so I don't really know how to approach it. I don't care what graphics system this is done in, but I am more familiar with ggplot2 and base. I would greatly appreciate a method of plotting this sort of unusual plot.

    Here's the data set (I'm including it as I was having difficulty using read.table on the data frame above):

        DF <- structure(list(item = structure(c(17L, 3L, 16L, 2L, 11L, 4L,
            19L, 14L, 13L, 9L, 20L, 15L, 10L, 5L, 18L, 12L, 7L, 6L, 8L, 1L),
            .Label = c("ITEM_1: I ARGUE A LOT", "ITEM_10: I GET IN MANY FIGHTS",
            "ITEM_11: I PHYSICALLY ATTACK PEOPLE", "ITEM_12: I SCREAM A LOT",
            "ITEM_13: I SHOW OFF OR CLOWN", "ITEM_14: I AM STUBBORN",
            "ITEM_15: MY MOODS OR FEELINGS CHANGE SUDDENLY", "ITEM_16: I TALK TOO MUCH",
            "ITEM_17: I TEASE OTHERS A LOT", "ITEM_18: I HAVE A HOT TEMPER",
            "ITEM_19: I THREATEN TO HURT PEOPLE", "ITEM_2: I BRAG",
            "ITEM_20: I AM LOUDER THAN OTHER KIDS", "ITEM_3: I AM MEAN TO OTHERS",
            "ITEM_4: I TRY TO GET A LOT OF ATTENTION", "ITEM_5: I DESTROY MY OWN THINGS",
            "ITEM_6: I DESTROY THINGS BELONGING TO OTHERS", "ITEM_7: I DISOBEY MY PARENTS",
            "ITEM_8: I DISOBEY AT SCHOOL", "ITEM_9: I AM JEALOUS OF OTHERS"),
            class = "factor"), difficulty = c(2.31179818110545, 1.95215237740899,
            1.93479536058926, 1.62610855327073, 1.62188759115818, 1.45137543733965,
            0.942552101641177, 0.899418119889782, 0.7275219669431, 0.617925967008653,
            0.612883990709181, 0.399477905189577, 0.322099696946661, 0.31707700560997,
            0.209021078266059, 0.199236065264793, 0.0602331732900628, -0.311554806052955,
            -0.677772822413495, -1.15013757942119)), .Names = c("item", "difficulty"),
            row.names = c(NA, -20L), class = "data.frame")

    Thank you in advance.
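
    A sketch of one way to draw such an item map with ggplot2 (assumptions: a single vertical difficulty scale with short item labels printed beside their difficulty values; adjust to taste):

        # Sketch: vertical difficulty scale with item labels at their values.
        library(ggplot2)

        DF$label <- sub(":.*", "", DF$item)   # short labels such as "ITEM_6"

        ggplot(DF, aes(x = 1, y = difficulty)) +
          geom_point(size = 2) +
          geom_text(aes(x = 1.05, label = label), hjust = 0, size = 3) +
          scale_x_continuous(limits = c(0.95, 2), breaks = NULL) +
          labs(x = NULL, y = "Item difficulty") +
          theme_bw()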

    Read the article

  • Combine MD5 hashes of multiple files

    - by user685869
    I have 7 files that I'm generating MD5 hashes for. The hashes are used to ensure that a remote copy of the data store is identical to the local copy. Unfortunately, the link between these two copies of the data is mind-numbingly slow. Changes to the data are very rare, but I have a requirement that the data be synchronized at all times (or as soon as possible).

    Rather than passing 7 different MD5 hashes across my (extremely slow) communications link, I'd like to generate the hash for each file and then combine these hashes into a single hash, which I can then transfer and re-calculate/use for comparison on the remote side. If the "combined hash" differs, then I'd start sending the 7 individual hashes to determine exactly which file(s) have been changed. For example, here are the MD5 hashes for the 7 files as of last week:

        0709d609d69385255c496436eb50402c
        709465a74411bd596595c7b9b158ae6a
        4ab657320ef33e3d5eb498e4c13d41b7
        3b49c6ab199994fd776bb63761414e72
        0fc28c5a010fc3c06c0c930c88e31a15
        c4ecd214662cac5aae0e53f6f252bf0e
        8b086431e43148a2c2d943ba30d31cc6

    I'd like to combine these hashes together such that I get a single unique value (perhaps another MD5 hash?) that I can then send to the remote system. On the remote system, I'd then perform the same calculation to determine if the data as a whole has changed. If it has, then I'd start sending the individual hashes, etc. The most important factor is that my "combined hash" be short enough that it uses less bandwidth than just sending all 7 hashes in the first place. I thought of writing the 7 MD5 hashes to a file and then hashing that file, but is there a better way?
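
    One way to do it without the temporary file (a sketch in Python; the file names are placeholders): hash the concatenation of the individual hex digests in a fixed, agreed order on both sides.

        # Sketch: one "combined" MD5 built from the per-file MD5s.
        import hashlib

        FILES = ["file1.dat", "file2.dat", "file3.dat", "file4.dat",
                 "file5.dat", "file6.dat", "file7.dat"]   # placeholder names

        def md5_of_file(path, chunk_size=1 << 20):
            h = hashlib.md5()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(chunk_size), b""):
                    h.update(chunk)
            return h.hexdigest()

        # Fixed order matters: both sides must concatenate the same way.
        individual = [md5_of_file(p) for p in sorted(FILES)]
        combined = hashlib.md5("".join(individual).encode("ascii")).hexdigest()

        print(combined)   # 32 hex characters to send across the slow link
        # If the remote combined hash differs, fall back to comparing the
        # entries of `individual` to find which file(s) changed.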

    Read the article

  • naming a function that exhibits "set if not equal" behavior

    - by Chris Sears
    This might be an odd question, but I'm looking for a word to use in a function name. I'm normally good at coming up with succinct, meaningful function names, but this one has me stumped, so I thought I'd appeal for help.

    The function will take some desired state as an argument and compare it to the current state. If no change is needed, the function will exit normally without doing anything. Otherwise, the function will take some action to achieve the desired state. For example, if I wanted to make sure the front door was closed, I might say:

        my_house.<something>_front_door('closed')

    What word or term should I use in place of the something? I'd like it to be short, readable, and minimize the astonishment factor.

    A couple of clarifying points... I would want someone calling the function to intuitively know they don't need to wrap the call in an 'if' that checks the current state. For example, this would be bad:

        if my_house.front_door_is_open():
            my_house.<something>_front_door('closed')

    Also, they should know that the function won't throw an exception if the desired state matches the current state. So this should never happen:

        try:
            my_house.<something>_front_door('closed')
        except DoorWasAlreadyClosedException:
            pass

    Here are some options I've considered:

        my_house.set_front_door('closed')
        my_house.setne_front_door('closed')  # ne=not equal, from the setne x86 instruction
        my_house.ensure_front_door('closed')
        my_house.configure_front_door('closed')
        my_house.update_front_door('closed')
        my_house.make_front_door('closed')
        my_house.remediate_front_door('closed')

    And I'm open to other forms, but most I've thought of don't improve readability, such as:

        my_house.ensure_front_door_is('closed')
        my_house.conditionally_update_front_door('closed')
        my_house.change_front_door_if_needed('closed')

    Thanks for any input!

    Read the article

  • How do you convert a data frame to a custom JSON format

    - by user1471980
    I need to convert a data frame to JSON format in R. My data frame y is this:

        dput(y)
        structure(list(Name = structure(c(38L, 23L, 16L, 35L, 21L, 6L, 34L, 15L, 46L, 1L,
            43L, 28L, 39L, 27L, 7L, 20L, 14L, 44L, 48L, 36L),
            .Label = c("server09", "server10", "server11", "server12", "server13",
            "server14", "server15", "server16", "server17", "server18", "server19",
            "server20", "server21", "server22", "server23", "server24", "server25",
            "server26", "server27", "server28", "server29", "server30", "server31",
            "server32", "server33", "server34", "server35", "server36", "server37",
            "server38", "server39", "server40", "server41", "server42", "server43",
            "server44", "server45", "server46", "server47", "server48", "server49",
            "server50", "server51", "server52", "server53", "server54", "server55",
            "server56", "server57", "server58"), class = "factor"),
            Date = c(1372737600, 1372737602, 1372737609, 1372737617, 1372737618,
            1372737618, 1372737643, 1372737646, 1372737648, 1372737652, 1372737654,
            1372737660, 1372737665, 1372737671, 1372737699, 1372737701, 1372737718,
            1372737721, 1372737728, 1372737731),
            Cpu = c(3.9025, 36.3042, 2.6075, 3.1338, 0.9474, 0.149, 5.4401, 2.5652,
            0.3612, 3.2651, 1.8703, 13.8967, 4.2438, 5.4401, 2.468, 0.9147, 1.4637,
            7.2528, 6.119, 7.7009)),
            .Names = c("Name", "Date", "Cpu"),
            row.names = c(1L, 42L, 83L, 84L, 125L, 126L, 127L, 168L, 169L, 202L,
            203L, 236L, 277L, 318L, 359L, 360L, 361L, 362L, 395L, 396L),
            class = "data.frame")

    I need my JSON file to look like this:

        [{
            "name": 'server13',
            "data": [
                [1372737600, 3.9025],
                [1372737602, 10],
                [1372737609, 10]
                ...
                [1372737731, 20]
            ]
        },
        {
            "name": 'server14',
            "data": [
                [1372737600, 4],
                [1372737602, 10],
                [1372737609, 10]
                ...
                [1372737731, 30]
            ]
        }]

    I used the rjson package as follows:

        p <- toJSON(as.list(y))

    and I get this output:

        "{\"Name\":[\"server13\",\"server14\",...],\"Date\":[1372737600,1372737602,..],\"Cpu\":[3.9025,36.3042,..]}"

    Is there an easy way to do this in R?
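
    One approach (a sketch, still using rjson): split the frame by Name, build a list of name/data records where data is a list of [Date, Cpu] pairs, and serialise that list rather than the raw data frame.

        # Sketch: nested list first, then toJSON.
        library(rjson)

        records <- lapply(split(y, y$Name, drop = TRUE), function(d) {
          list(
            name = as.character(d$Name[1]),
            data = lapply(seq_len(nrow(d)), function(i) c(d$Date[i], d$Cpu[i]))
          )
        })

        cat(toJSON(unname(records)))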

    Read the article

  • Zoom in Java Swing application

    - by Shirky
    Hi there, I am looking for ways to zoom in a Java Swing application. That means that I would like to resize all components in a given JPanel by a given factor, as if I took a screenshot of the UI and just applied an "image scale" operation. The font size as well as the size of checkboxes, text boxes, cursors, etc. has to be adjusted.

    It is possible to scale a component by applying a transform to its graphics object:

        protected Graphics getComponentGraphics(Graphics g) {
            Graphics2D g2d = (Graphics2D) g;
            g2d.scale(2, 2);
            return super.getComponentGraphics(g2d);
        }

    That works as long as you don't care about self-updating components. If you have a text box in your application, this approach ceases to work, since the text box repaints itself every second to show the (blinking) cursor, and because it doesn't use the modified graphics object this time, the component appears at the old location. Is there a possibility to change a component's graphics object permanently? There is also a problem with the mouse click event handlers.

    The other possibility would be to resize all child components of the JPanel (setPreferredSize) to a new size. That doesn't work for checkboxes, since the displayed picture of the checkbox doesn't change its size. I also thought of programming my own layout manager, but I don't think that will work, since layout managers only change the position (and size) of objects but are not able to zoom into checkboxes (see the previous paragraph). Or am I wrong with this hypothesis?

    Do you have any ideas how one could achieve a zoomable Swing GUI without programming custom components? I looked for rotatable user interfaces because the problem seems familiar, but I also didn't find any satisfying solution there. Thanks for your help, Chris
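
    One direction worth sketching (Java 7+; a sketch, not a complete solution): a JLayer whose LayerUI scales the painting of the whole component tree. This only covers drawing; mouse coordinates would still need to be translated separately, which is the other half of the problem mentioned above.

        // Sketch: scale the *painting* of a component tree with JLayer.
        // Mouse events are not remapped; they still arrive in unscaled
        // coordinates and need separate handling.
        import javax.swing.JComponent;
        import javax.swing.JLayer;
        import javax.swing.plaf.LayerUI;
        import java.awt.Graphics;
        import java.awt.Graphics2D;

        class ZoomUI extends LayerUI<JComponent> {
            private final double scale;

            ZoomUI(double scale) { this.scale = scale; }

            @Override
            public void paint(Graphics g, JComponent c) {
                Graphics2D g2 = (Graphics2D) g.create();
                g2.scale(scale, scale);
                super.paint(g2, c);   // paints the wrapped view, scaled
                g2.dispose();
            }
        }

        // Usage idea:
        // JLayer<JComponent> layer = new JLayer<JComponent>(myPanel, new ZoomUI(2.0));
        // frame.setContentPane(layer);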

    Read the article

  • Project Euler #3

    - by Alex
    Question: The prime factors of 13195 are 5, 7, 13 and 29. What is the largest prime factor of the number 600851475143?

    I found this one pretty easy, but running the file is taking an extremely long time; it's been going for a while and the highest number I've got to is 716151937. Here is my code. Am I just going to have to wait, or is there an error in my code?

        // User-made class
        public class Three {
            public static boolean checkPrime(long p) {
                long i;
                boolean prime = false;
                for (i = 2; i < p / 2; i++) {
                    if (p % i == 0) {
                        prime = true;
                        break;
                    }
                }
                return prime;
            }
        }

        // Note: This is a separate file
        public class ThreeMain {
            public static void main(String[] args) {
                long comp = 600851475143L;
                boolean prime;
                long i;
                for (i = 2; i < comp / 2; i++) {
                    if (comp % i == 0) {
                        prime = Three.checkPrime(i);
                        if (prime == true) {
                            System.out.println(i);
                        }
                    }
                }
            }
        }
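
    For scale, trial division up to n/2 for a twelve-digit number is roughly 3 x 10^11 iterations, so the loop as written will run for a very long time. A sketch of a much faster approach: divide factors out of n as they are found, so the loop only has to run up to the square root of whatever remains.

        // Sketch: strip each factor out of n as it is found; the last factor
        // removed (or whatever is left at the end) is the largest prime factor.
        public class LargestPrimeFactor {
            public static void main(String[] args) {
                long n = 600851475143L;
                long largest = 1;
                for (long f = 2; f * f <= n; f++) {
                    while (n % f == 0) {
                        largest = f;
                        n /= f;
                    }
                }
                if (n > 1) {
                    largest = n;  // the remainder is itself prime
                }
                System.out.println(largest);
            }
        }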

    Read the article

  • Sort MySQL query result by an alphanumeric field

    - by Jason Shultz
    I'm querying a table in a database using PHP. One of the fields is a column called "rank" and has data like the following:

        none
        1-bronze
        2-silver
        3-gold
        ...
        10-ambassador
        11-president

    I want to be able to sort the results based on that "rank" column. Any results where the field is "none" get excluded, so those don't factor in. As you can already guess, right now the results are coming back like this:

        1-bronze
        10-ambassador
        11-president
        2-silver
        3-gold

    Of course, I would like for it to be sorted like the following:

        1-bronze
        2-silver
        3-gold
        ...
        10-ambassador
        11-president

    Right now the query result is being returned as an object. I've tried different sort options like natsort, sort and array_multisort, but haven't got it to work the way I'm sure it can. I would prefer keeping the results in object form if possible. I'm passing the data on to a view in the next step, although it's perfectly acceptable to pass the object to the view and then do the work there, so it's not an issue after all. :) Thank you for your help; I hope I'm making sense.
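
    Since the number leads the string, one option is to let MySQL sort on that numeric prefix directly instead of sorting the object in PHP afterwards. A sketch (the table name is a placeholder):

        -- Sketch: exclude 'none' and sort by the numeric prefix of `rank`.
        SELECT *
        FROM members
        WHERE `rank` <> 'none'
        ORDER BY CAST(SUBSTRING_INDEX(`rank`, '-', 1) AS UNSIGNED);

        -- MySQL also coerces a leading number when you add zero, so
        -- ORDER BY `rank` + 0 gives the same ordering.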

    Read the article

  • Where to find new Micro-BTX (uBTX) motherboards? Or should I just replace the box?

    - by John Rudy
    OK, so I'm guessing that it's dead. It's not my machine, and the owner is on a very fixed (i.e., none) income. I'm generous, but I'm not that generous, since I already gave him what (at the time) was a fully functional and fairly well-equipped machine. (Aside from the mobo and proc, almost nothing else in it was stock. I'd taken it up to 3 GB of RAM, upgraded the hard drive, added a decent video card, installed a wireless adapter, running Vista, etc.)

    According to further research, the machine uses a Micro-BTX (uBTX) motherboard, and since it's an AMD Athlon64, the AM2 socket. So I'm looking at a few options, and am wondering what's the best route to take.

    Find an AM2 socket uBTX mobo. I can't find them new online anywhere, leading me to believe that this is an obsolete form factor/chip combination. I don't want a refurb or a system pull because, quite honestly, once I deal with this mess, I don't want to go through it again in another year or two.

    Find an Intel uBTX mobo and a (relatively -- hah, I still want at least a dual-core) inexpensive Intel CPU. At this point, the only things stock in the machine would be the case and the PSU. :)

    Buy a bare-bones kit (mobo/proc/PSU/case, sometimes even RAM) from somewhere like CompUSA/TigerDirect or Fry's and move all of the other hardware over. This makes life difficult because the copy of Vista is an upgrade, tied to the copy of XP which shipped on the Gateway, which is OEM and won't install on the new box. :)

    If I change the CPU brand (AMD to Intel), will I need to reinstall Windows, or can it just be reactivated? Where can I actually find a new, in-box, not-system-pull, not-refurb AM2 uBTX mobo? Do they even exist anymore? What kind of money are we talking (US dollars)?

    The end goal is to get the machine functional again as cheaply as humanly possible. If it were my own machine, I wouldn't even be asking this; I'd be custom-building a new one. However, it's not mine, I'm shelling out of pocket for the fix (plus the work), and thus want to keep that end price low-low-low.

    Read the article

  • Business Owners - What Remote Desktop Solution Do You Use To Service Your Clients' PCs?

    - by Sootah
    Howdy fellow computer geeks, I am the owner of a local computer repair business that primarily services its clients on-site. On the occasions that we do service the machines in the office, we generally have one of our techs pick the computer up while they are out and about and bring it back with them. Only rarely will we require the customer to bring us the computer themselves. In order to reduce costs, be much more efficient, and potentially expand our market far beyond what would be feasible with travel required, I am looking at ways that we can service our clients remotely whenever possible.

    What we're in need of is a solid remote desktop application that will be incredibly easy for our customers to connect to, as well as robust enough that we don't need the client babysitting the computer during the entire repair. Ideally I would like to use a web-based solution so that we don't have to walk the customers through installing, connecting, and configuring it over the phone. This would be unacceptable because of the level of service they are used to. Effectively we'd want them to be able to just go to a URL, enter a PIN or something, and then they are connected and ready to rumble. (Obviously the option to just email them a link that'd do all this for them would be what we'd be aiming for.)

    Along with the ease-of-use factor, we would need the product to not require any further intervention on the part of the client after we have connected. Nobody is going to be happy if we have to call them every 15 minutes so they can reconnect to us every time we reboot, so auto-reconnect is an absolute must.

    The only product I know of right now that does any of this is LogMeIn Rescue. It allows unattended access, the applet is lightweight and installs quickly, and the customer can either enter a PIN on the site or just click a link emailed to them in order to connect. The only real downside I see to LogMeIn Rescue is that it's $120.00/month per technician. While we'd ultimately end up saving far more than that per month just in fuel costs alone, I'd like to explore any other options out there that I may not have come across.

    So: are there any equally good products out there? If so, what are they, why do you recommend them, how have you been utilizing them yourself, and what do they cost? Thanks in advance for your help! -Sootah

    Read the article

  • What Remote Desktop Solution Do You Use To Service Your Clients' PCs? [closed]

    - by Sootah
    Possible Duplicate: What’s the best Remote Desktop Application?

    I am the owner of a local computer repair business that primarily services its clients on-site. On the occasions that we do service the machines in the office we generally have one of our techs pick the computer up while they are out and about and bring it back with them. Only rarely will we require the customer to bring us the computer themselves. In order to reduce costs, be much more efficient, and potentially expand our market far beyond what would be feasible with travel required; I am looking at ways that we can service our clients remotely whenever possible.

    What we're in need of is a solid remote desktop application that will be incredibly easy for our customers to connect to, as well as be robust enough that we don't need the client babysitting the computer during the entire repair. Ideally I would like to use a web-based solution so that we don't have to walk the customers through installing, connecting, and configuring it over the phone. This would be unacceptable because of the level of service they are used to. Effectively we'd want them to be able to just go to a URL, enter a PIN or something, and then they are connected and ready to rumble. (Obviously the option to just email them a link that'd do all this for them would be what we'd be aiming for)

    Along with the ease of use factor, we would need the product to not require any further intervention on the part of the client after we have connected. Nobody is going to be happy if we have to call them every 15 minutes so they can reconnect to us every time we reboot - so auto-reconnect is an absolute must.

    The only product I know of right now that does any of this is LogMeIn Rescue. It allows unattended access, the applet is lightweight and installs quickly, and the customer can either enter a PIN on the site or just click a link emailed to them in order to connect. The only real downside I see to LogMeIn Rescue is that it's $120.00/month per technician. While we'd ultimately end up saving far more than that per month just in fuel costs alone, I'd like to explore any other options out there that I may not have come across.

    Are there any equally good products out there? If so what are they, why do you recommend them, how have you been utilizing them yourself, and what do they cost?

    Read the article

  • iPhone 3G refuses to transfer purchased apps to iTunes

    - by andynormancx
    My iPhone 3G refuses to transfer purchased apps to iTunes. This is causing me major problems with syncing. Whenever I attempt to transfer apps from the iPhone to iTunes, it goes through the motions but never actually transfers anything. It displays the various apps in the info area at the top of the screen, but the progress bar never advances. In comparison, when I sync other iPhones using the same install of iTunes, the progress bar advances and apps are transferred. The same also happens on clean installs of iTunes on other computers; my iPhone seems to be the common factor.

    I have tried restoring the phone from a backup, which makes no difference. This started happening months ago and the phone has since been upgraded to 3.0 and 3.1, but the problem still persists.

    Originally it was just a minor irritation, but I made an attempt to fix it which has made things worse. I deleted all the apps from within iTunes and then did "Transfer Purchases" in the hope that it might fix something. It didn't fix anything. Also, I cannot now sync at all. If I do sync, iTunes now does "transferring purchases", fails to transfer, and then deletes all the apps (and data) from my iPhone. It also means I can't sync music, podcasts or anything else. I can't sync anything else because I can't temporarily turn off app syncing: iTunes then warns that the apps on the iPhone will be deleted. I also tried de-authorising and re-authorising.

    What can I do to get app syncing working again?

    P.S. I have considered deleting all the apps and reinstalling them one by one, in the hope that it will fix the problem. However, I don't really want to embark on doing that for 55+ apps and re-entering login details etc. for the apps that need them, especially as I might then find out it didn't solve the problem.

    Update: The latest update to iTunes 9 has improved things in one key aspect. If I let a sync run to completion, iTunes no longer deletes all the apps from my phone. So I can now sync all my other data, even if I still can't sync my apps.

    Resolved: See my answer to the question for how I finally resolved the problem.

    Read the article

  • How to change mouse pointer icon in Xfce Debian 7 Wheezy?

    - by kadaj
    I copied the cursor theme (oxy-neon, or Oxygen Neon) to /usr/share/icons, and from Applications Menu - Settings - Mouse I am able to see the new theme. I clicked on it, but the pointer doesn't change. The text-typing ('I') icon, busy icon, hand icon, and window-resize icons did change; the pointer icon remains the same black Adwaita one. I removed the Adwaita folder from the icons folder, and the mouse pointer still doesn't change.

    Is the pointer theme specified elsewhere? I have no setting for it under my home directory. I tried logging out, restarting, and restarting xfwm4, but nothing works. I just found that the pointer icon changes when the pointer is inside Firefox, but it's not consistent: it keeps changing when I click menu items. Very weird. Any idea how to fix this?

    This is the output of running gsettings list-recursively org.gnome.desktop.interface:

        ~$ gsettings list-recursively org.gnome.desktop.interface
        org.gnome.desktop.interface automatic-mnemonics true
        org.gnome.desktop.interface buttons-have-icons false
        org.gnome.desktop.interface can-change-accels false
        org.gnome.desktop.interface clock-format '24h'
        org.gnome.desktop.interface clock-show-date false
        org.gnome.desktop.interface clock-show-seconds false
        org.gnome.desktop.interface cursor-blink true
        org.gnome.desktop.interface cursor-blink-time 1200
        org.gnome.desktop.interface cursor-blink-timeout 10
        org.gnome.desktop.interface cursor-size 24
        org.gnome.desktop.interface cursor-theme 'Adwaita'
        org.gnome.desktop.interface document-font-name 'Sans 11'
        org.gnome.desktop.interface enable-animations true
        org.gnome.desktop.interface font-name 'Cantarell 11'
        org.gnome.desktop.interface gtk-color-palette 'black:white:gray50:red:purple:blue:light blue:green:yellow:orange:lavender:brown:goldenrod4:dodger blue:pink:light green:gray10:gray30:gray75:gray90'
        org.gnome.desktop.interface gtk-color-scheme ''
        org.gnome.desktop.interface gtk-im-module ''
        org.gnome.desktop.interface gtk-im-preedit-style 'callback'
        org.gnome.desktop.interface gtk-im-status-style 'callback'
        org.gnome.desktop.interface gtk-key-theme 'Default'
        org.gnome.desktop.interface gtk-theme 'Adwaita'
        org.gnome.desktop.interface gtk-timeout-initial 200
        org.gnome.desktop.interface gtk-timeout-repeat 20
        org.gnome.desktop.interface icon-theme 'gnome'
        org.gnome.desktop.interface menubar-accel 'F10'
        org.gnome.desktop.interface menubar-detachable false
        org.gnome.desktop.interface menus-have-icons false
        org.gnome.desktop.interface menus-have-tearoff false
        org.gnome.desktop.interface monospace-font-name 'Monospace 11'
        org.gnome.desktop.interface show-input-method-menu true
        org.gnome.desktop.interface show-unicode-menu true
        org.gnome.desktop.interface text-scaling-factor 1.0
        org.gnome.desktop.interface toolbar-detachable false
        org.gnome.desktop.interface toolbar-icons-size 'large'
        org.gnome.desktop.interface toolbar-style 'both-horiz'
        org.gnome.desktop.interface toolkit-accessibility false
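
    One thing that often matters here (a sketch; the oxy-neon directory name is taken from the question): many X clients read the default cursor theme from ~/.icons/default/index.theme, or from the x-cursor-theme alternative on Debian, rather than from the Xfce mouse setting, so pointing that at the theme and logging back in can change the actual pointer.

        # Sketch: set the X default cursor theme to oxy-neon for this user,
        # then log out and back in so all applications pick it up.
        mkdir -p ~/.icons/default
        printf '[Icon Theme]\nInherits=oxy-neon\n' > ~/.icons/default/index.theme

        # System-wide alternative on Debian (assumes the theme ships an
        # index.theme under /usr/share/icons/oxy-neon):
        # sudo update-alternatives --config x-cursor-theme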

    Read the article

  • Why is vCenter 5.1u1 exiting hosts from maintenance mode?

    - by Shane Madden
    This vCenter server was just upgraded to 5.1 update 1. I'm going through hosts and bringing firmware up to date, then upgrading them from various versions of 5.0 to 5.1u1. vCenter 5.1u1 seems to have an interesting new behavior: it's removing hosts from maintenance mode when they reconnect after being disconnected -- but very inconsistently; I've seen it maybe 4 or 5 times on roughly 25-30 host reboots. I've only seen it happen on 5.0 hosts that have not yet been upgraded to 5.1.

    In the image, I placed the host in maintenance mode and rebooted it into the HP SPP DVD's automatic update mode. After its usual ~40 minute update process, the host came back online... and 7 seconds before even logging that the host had reconnected, vCenter had sent the host a task to exit maintenance mode. In my understanding, the only time vCenter should drop a host out of maintenance mode is when vCenter put it into maintenance mode itself (such as a VUM upgrade task). Why would this vCenter be unilaterally exiting a host from user-initiated maintenance mode?

    Edit, additional info: I ran the firmware upgrades on 5 more hosts, all at the same time. Two of them exited maintenance mode after reconnecting, three did not. The common factor of those exiting maintenance mode seems to be how long they were offline; the two that took a few tries to boot to the virtual media are the two that got knocked out of maintenance mode.

        esx31 (image above): 45 minutes unresponsive
        esx19 (exited maint): 87 minutes unresponsive
        esx24 (stayed in maint): 32 minutes unresponsive
        esx29 (stayed in maint): 39 minutes unresponsive
        esx32 (stayed in maint): 30 minutes unresponsive
        esx34 (exited maint): 70 minutes unresponsive

    Edit: The disconnect-time idea seems to have been a red herring, as it's not happening consistently. Additionally, in the vpxd.log the exit-maintenance-mode task initiation seems to always immediately follow this vim.EnvironmentBrowser.queryProvisioningPolicy SOAP call. Here are the lines, slightly trimmed for clarity:

        15:27:49.535 [info 'vpxdvpxdVmomi'] [ClientAdapterBase::InvokeOnSoap] Invoke done (esx31, vim.EnvironmentBrowser.queryProvisioningPolicy)
        15:27:49.560 [info 'commonvpxLro'] [VpxLRO] -- BEGIN task -- esx31 -- HostSystem.exitMaintenanceMode --

    Note that on the nodes that don't get the exit task, the vim.EnvironmentBrowser.queryProvisioningPolicy event still occurs. I'm not seeing any other differences in events before or after this in the reconnect process, aside from the extra events caused by exiting maintenance mode. Given the log's mention of provisioning policies, looking for autodeploy-related maintenance-mode issues turns up complaints about similar behavior (though I'm not using autodeploy at all).

    Read the article

  • Why do I see a large performance hit with DRBD?

    - by BHS
    I see a much larger performance hit with DRBD than its user manual says I should get. I'm using DRBD 8.3.7 (Fedora 13 RPMs). I've set up a DRBD test and measured throughput of disk and network without DRBD:

        dd if=/dev/zero of=/data.tmp bs=512M count=1 oflag=direct
        536870912 bytes (537 MB) copied, 4.62985 s, 116 MB/s

    (/ is a logical volume on the disk I'm testing with, mounted without DRBD.)

        iperf:
        [  4]  0.0-10.0 sec  1.10 GBytes  941 Mbits/sec

    According to Throughput overhead expectations, the bottleneck would be whichever is slower, the network or the disk, and DRBD should have an overhead of 3%. In my case network and I/O seem to be pretty evenly matched. It sounds like I should be able to get around 100 MB/s.

    So, with the raw drbd device, I get:

        dd if=/dev/zero of=/dev/drbd2 bs=512M count=1 oflag=direct
        536870912 bytes (537 MB) copied, 6.61362 s, 81.2 MB/s

    which is slower than I would expect. Then, once I format the device with ext4, I get:

        dd if=/dev/zero of=/mnt/data.tmp bs=512M count=1 oflag=direct
        536870912 bytes (537 MB) copied, 9.60918 s, 55.9 MB/s

    This doesn't seem right. There must be some other factor playing into this that I'm not aware of.

    global_common.conf:

        global {
            usage-count yes;
        }
        common {
            protocol C;
        }
        syncer {
            al-extents 1801;
            rate 33M;
        }

    data_mirror.res:

        resource data_mirror {
            device    /dev/drbd1;
            disk      /dev/sdb1;
            meta-disk internal;
            on cluster1 {
                address 192.168.33.10:7789;
            }
            on cluster2 {
                address 192.168.33.12:7789;
            }
        }

    For the hardware I have two identical machines:

        6 GB RAM
        Quad-core AMD Phenom 3.2 GHz
        Motherboard SATA controller
        7200 RPM, 64 MB cache, 1 TB WD drive

    The network is 1 Gb connected via a switch. I know that a direct connection is recommended, but could it make this much of a difference?

    Edited: I just tried monitoring the bandwidth used to try to see what's happening. I used ibmonitor and measured average bandwidth while I ran the dd test 10 times. I got:

        avg ~450 Mbits writing to ext4
        avg ~800 Mbits writing to raw device

    It looks like with ext4, DRBD is using about half the bandwidth it uses with the raw device, so there's a bottleneck that is not the network.
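
    For reference, a sketch of the DRBD 8.3 options people usually look at when chasing write throughput (illustrative values only, not recommendations; they go in the resource or common section, and disabling barriers/flushes is only safe when writes are protected by a battery-backed or equivalent cache):

        # Sketch: commonly tuned DRBD 8.3 knobs -- benchmark each change.
        net {
            max-buffers      8000;
            max-epoch-size   8000;
            sndbuf-size      512k;
            unplug-watermark 16;
        }
        disk {
            no-disk-barrier;    # only with a safe (battery-backed) write cache
            no-disk-flushes;
        }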

    Read the article

  • Quantifying the effects of partition mis-alignment

    - by Matt
    I'm experiencing some significant performance issues on an NFS server. I've been reading up a bit on partition alignment, and I think I have my partitions misaligned. I can't find anything that tells me how to actually quantify the effects of misaligned partitions. Some of the general information I found suggests the performance penalty can be quite high (upwards of 60%) and others say it's negligible. What I want to do is determine if partition alignment is a factor in this server's performance problems or not, and if so, to what degree?

    So I'll put my info out here, and hopefully the community can confirm if my partitions are indeed misaligned, and if so, help me put a number on what the performance cost is.

    The server is a Dell R510 with dual E5620 CPUs and 8 GB RAM. There are eight 15k 2.5" 600 GB drives (Seagate ST3600057SS) configured in hardware RAID-6 with a single hot spare. The RAID controller is a Dell PERC H700 w/512MB cache (Linux sees this as an LSI MegaSAS 9260). The OS is CentOS 5.6; the home directory partition is ext3, with options "rw,data=journal,usrquota".

    I have the HW RAID configured to present two virtual disks to the OS: /dev/sda for the OS (boot, root and swap partitions), and /dev/sdb for a big NFS share:

        [root@lnxutil1 ~]# parted -s /dev/sda unit s print
        Model: DELL PERC H700 (scsi)
        Disk /dev/sda: 134217599s
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos

        Number  Start    End         Size        Type     File system  Flags
         1      63s      465884s     465822s     primary  ext2         boot
         2      465885s  134207009s  133741125s  primary               lvm

        [root@lnxutil1 ~]# parted -s /dev/sdb unit s print
        Model: DELL PERC H700 (scsi)
        Disk /dev/sdb: 5720768639s
        Sector size (logical/physical): 512B/512B
        Partition Table: gpt

        Number  Start  End          Size         File system  Name  Flags
         1      34s    5720768606s  5720768573s               lvm

    Edit 1

    Using the cfq IO scheduler (default for CentOS 5.6):

        # cat /sys/block/sd{a,b}/queue/scheduler
        noop anticipatory deadline [cfq]
        noop anticipatory deadline [cfq]

    Chunk size is the same as strip size, right? If so, then 64 kB:

        # /opt/MegaCli -LDInfo -Lall -aALL -NoLog

        Adapter #0

        Number of Virtual Disks: 2
        Virtual Disk: 0 (target id: 0)
        Name:os
        RAID Level: Primary-6, Secondary-0, RAID Level Qualifier-3
        Size:65535MB
        State: Optimal
        Stripe Size: 64kB
        Number Of Drives:7
        Span Depth:1
        Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
        Current Cache Policy: WriteThrough, ReadAdaptive, Direct, No Write Cache if Bad BBU
        Access Policy: Read/Write
        Disk Cache Policy: Disk's Default
        Number of Spans: 1
        Span: 0 - Number of PDs: 7

        ... physical disk info removed for brevity ...

        Virtual Disk: 1 (target id: 1)
        Name:share
        RAID Level: Primary-6, Secondary-0, RAID Level Qualifier-3
        Size:2793344MB
        State: Optimal
        Stripe Size: 64kB
        Number Of Drives:7
        Span Depth:1
        Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
        Current Cache Policy: WriteThrough, ReadAdaptive, Direct, No Write Cache if Bad BBU
        Access Policy: Read/Write
        Disk Cache Policy: Disk's Default
        Number of Spans: 1
        Span: 0 - Number of PDs: 7

    If it's not obvious, virtual disk 0 corresponds to /dev/sda, for the OS; virtual disk 1 is /dev/sdb (the exported home directory tree).
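
    As a quick back-of-the-envelope check (a sketch; it assumes the 64 kB strip is the boundary that matters): a partition start is aligned to the strip when its starting sector is a multiple of 128, since 64 kB / 512 B = 128 sectors.

        # Sketch: a non-zero remainder means the partition start is not on a
        # 64 kB boundary (128 sectors of 512 B).
        for start in 63 465885 34; do
            echo "start=${start}s  remainder=$((start % 128))"
        done
        # start=63s      remainder=63   -> misaligned
        # start=465885s  remainder=93   -> misaligned
        # start=34s      remainder=34   -> misaligned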

    Read the article

  • RHEL5: Can't create sparse file bigger than 256GB in tmpfs

    - by John Kugelman
    /var/log/lastlog gets written to when you log in. The size of this file is based on the largest UID in the system: the larger the maximum UID, the larger this file is. Thankfully it's a sparse file, so the size on disk is much smaller than the size ls reports (ls -s reports the size on disk).

    On our system we're authenticating against an Active Directory server, and the UIDs users are assigned end up being really, really large. Like, say, UID 900,000,000 for the first AD user, 900,000,001 for the second, etc. That's strange but should be okay. It results in /var/log/lastlog being huge, though -- once an AD user logs in, lastlog shows up as 280 GB. Its real size is still small, thankfully.

    This works fine when /var/log/lastlog is stored on the hard drive on an ext3 filesystem. It breaks, however, if lastlog is stored in a tmpfs filesystem. Then it appears that the max file size for any file on the tmpfs is 256 GB, so the sessreg program errors out trying to write to lastlog.

    Where is this 256 GB limit coming from, and how can I increase it? As a simple test for creating large sparse files I've been doing:

        dd if=/dev/zero of=sparse-file bs=1 count=1 seek=300GB

    I've tried Googling for "tmpfs max file size", "256GB filesystem limit", "linux max file size", things like that. I haven't been able to find much. The only mention of 256 GB I can find is that ext3 filesystems with 2 kB blocks are limited to 256 GB files. But our hard drives are formatted with 4 kB blocks, so that doesn't seem to be it -- not to mention this is happening in a tmpfs mounted ON TOP of the hard drive, so the ext3 partition shouldn't be a factor.

    This is all happening on a 64-bit Red Hat Enterprise Linux 5.4 system. Interestingly, on my personal development machine, which is a 32-bit Fedora Core 6 box, I can create 300 GB+ files in tmpfs filesystems no problem. On the RHEL 5.4 systems it is no go.

    Read the article

  • Core i7 c1e and speedstepping - BSOD on shutdown

    - by DeaconDesperado
    I'm having an interesting problem with my recent Core i7 digital audio workstation build that I am curious to see if others have encountered. First, here are the specs on the machine:

        ASUS P6TD Deluxe Intel X58 Socket LGA1366 motherboard
        Intel Core i7-950 3.06 GHz 8M LGA1366 CPU
        CORSAIR DOMINATOR 6GB (3 x 2GB) 240-Pin DDR3 SDRAM DDR3 1600
        Western Digital Caviar Black WD5001AALS 500GB

    Plus a couple of ASUS optical drives and a 750 W Corsair PSU, running Windows 7 x64. All this is connected to the nefarious Digi 002 FireWire audio interface for use with Pro Tools.

    I mostly followed the specs posted by many other i7 users in the Digidesign community, who pooled their collective knowledge in this thread. After completing my build, I fell victim to the "UD5 squeal" described in that forum thread. So, taking the advice posted, I disabled C1E advanced halt state and Intel SpeedStep (I would likely have done this anyway to maintain a stable clock; power consumption isn't really a relevant concern on this machine). I enabled XMP to set the RAM timings properly as well.

    What I am experiencing is a BSOD upon shutdown, but only immediately after Windows fully exits and ends all processes. The error is a MACHINE_CHECK_EXCEPTION 0x000000. The funny thing is that it is extremely intermittent and only occurs if the shutdown immediately followed a period of relative idleness. It does not generate a minidump, I suspect because Windows monitoring has terminated by the time this error occurs. No damage is evident, and one can simply turn the machine off manually and the system will act as though a proper shutdown had occurred. If anything it is an annoyance; I just want to be certain it is not affecting my long-term stability.

    I have read that the i7 950 does not like DRAM voltages past 1.65 V, but that they are acceptable if they are within 0.5 of the BCLK setting. I have tried disabling XMP and setting all timings to auto, and the problem still manifests in an identical way. It is suspect that the CPU idleness preceding shutdown is the determining factor, as both C1E and SpeedStep are settings intended to modify handling of this state. Any suggestions or prior experiences would be greatly appreciated.

    EDIT: The behavior very closely resembles what's described in this thread: http://www.tomshardware.com/forum/12003-63-shut-problem-windows The benign nature of it is identical. I can't seem to download the hotfix cited there, however.

    Read the article

  • Empty $_POST data

    - by Antimony
    I am trying to post a post to my MyBB server from a Python script, but try as I might, I can't get it to work. The request shows up in the forensic log and the headers are in the $_SERVER variable, but $_POST is always an empty array. The error log shows nothing, even at the debug level. I've already tried searching, but I haven't found anything that's helped. I already checked the post_max_size setting, which is 8M. Another factor is that it's just my own requests which aren't going through; browser-generated requests seem to do just fine. I've looked and looked, but I can't find anything I'm doing differently that should matter. Anyway, here is an example request:

        POST /newreply.php?tid=1&processed=1 HTTP/1.1
        Host: <redacted>
        Accept-Encoding: identity
        Content-Length: 1153
        Content-Type: multipart/form-data; boundary=-->0xa216654L
        Cookie: sid=<redacted>; mybb[lastvisit]=1354995469; mybb[lastactive]=1354995500; mybb[threadread]=a%3A1%3A%7Bi%3A1%3Bi%3A1354995469%3B%7D; mybb[forumread]=a%3A1%3A%7Bi%3A2%3Bi%3A1354995469%3B%7D; loginattempts=1; mybbuser=2_ZlVVfaYS9FstZGQzr4KiNRUm3Z4xAgJkTPPq2ouFcuaragOTVQ
        Accept: text/html
        User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:14.0) Gecko/20100101 Firefox/14.0.1

        -->0xa216654L
        Content-Disposition: form-data; name="my_post_key"

        257b2bbef4334000d9088169154900a3
        -->0xa216654L
        Content-Disposition: form-data; name="quoted_ids"

        -->0xa216654L
        Content-Disposition: form-data; name="tid"

        1
        -->0xa216654L
        Content-Disposition: form-data; name="message"

        foo!2
        -->0xa216654L
        Content-Disposition: form-data; name="attachmentact"

        -->0xa216654L
        Content-Disposition: form-data; name="attachmentaid"

        -->0xa216654L
        Content-Disposition: form-data; name="icon"

        -1
        -->0xa216654L
        Content-Disposition: form-data; name="posthash"

        e93a2c78ce3f6807a86fd475ef4178cf
        -->0xa216654L
        Content-Disposition: form-data; name="postoptions[subscriptionmethod]"

        -->0xa216654L
        Content-Disposition: form-data; name="replyto"

        -->0xa216654L
        Content-Disposition: form-data; name="message_new"

        foo!2
        -->0xa216654L
        Content-Disposition: form-data; name="submit"

        Post Reply
        -->0xa216654L
        Content-Disposition: form-data; name="attachment"; filename=""
        Content-Type: application/octet-stream

        -->0xa216654L
        Content-Disposition: form-data; name="action"

        do_newreply
        -->0xa216654L
        Content-Disposition: form-data; name="subject"

        Lol
        -->0xa216654L
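
    For comparison (a sketch; the URL, cookie values and the use of the requests library are assumptions): letting an HTTP library build the form body itself guarantees that the boundary markers in the body match the one declared in the Content-Type header, which is the part that is easiest to get wrong when constructing multipart data by hand.

        # Sketch: let the library construct the POST body. URL, cookies and
        # field values below are placeholders taken from the dump above.
        import requests

        fields = {
            "my_post_key": "257b2bbef4334000d9088169154900a3",
            "tid": "1",
            "message": "foo!2",
            "message_new": "foo!2",
            "icon": "-1",
            "posthash": "e93a2c78ce3f6807a86fd475ef4178cf",
            "action": "do_newreply",
            "subject": "Lol",
            "submit": "Post Reply",
        }
        cookies = {"sid": "REDACTED", "mybbuser": "REDACTED"}

        resp = requests.post(
            "http://example.com/newreply.php?tid=1&processed=1",
            data=fields,      # urlencoded; pass files={...} to force multipart
            cookies=cookies,
        )
        print(resp.status_code)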

    Read the article

  • Possible capacitor plague - need help identifying

    - by cornjuliox
    I've been having some PC power issues lately, and I think I've tracked it down to a bad power supply. Lately, when I'm on my PC it will often restart without warning, displaying "Hypertransport sync flood error occurred last boot." once POST finishes. I've googled the error, but can't come to a definitive conclusion as to what's causing it. I've seen posts suggesting that it might be a power supply issue, but nothing conclusive. Here's what I've done so far:

    - I haven't installed anything suspect within the last 3 months.
    - I do overclock just a tiny bit, so I tried raising the voltages a little. That didn't work, so I brought both the CPU multiplier and the voltages back to their default settings, but that didn't solve the problem either. The problem still occurs.
    - AV-scanned the whole system; nothing suspect.
    - I suspected that it might be a bad power supply, so I cracked that open and found the following:

    I think it might be cap plague, but I'm not sure. It looks more like glue TBH. Could someone help me figure out what might be wrong with this PC?

    EDIT: Sometimes, after these restarts, I noticed that the GPU fan doesn't spin up, and the single rear case fan that just happens to be connected to the same Molex Y-cable as the GFX card doesn't spin up either. Anything to that?

    EDIT 2: I do use the system quite heavily, but I don't know how that will factor into this. I often play Diablo 3 and EVE Online at the same time, frequently alt-tabbing between the two. I also have Firefox open in the background, sometimes with several tabs, and if I feel like it, I'll mute the in-game sound and open foobar2000 for better music. Could it be that I'm just pushing this thing too hard?

    EDIT 3: I also noticed something odd. Right before I experience these restarts, my monitor suffers from very faint lines of static moving across the screen. The monitor is still very much usable, but it is very annoying. Following the restart it disappears, and then it gradually re-appears over the next few days until the machine restarts again. I find it to be very odd.

    System specs for good measure:

        Orion 600 W PSU
        AMD Athlon II X3 440 (overclocked to 3.14 GHz; raised the CPU multiplier to x13 from x10)
        MSI G40-775 motherboard
        1 GB inno3D GTX 550 Ti
        4 GB DDR3 RAM
        500 GB Samsung SATA HD

    Read the article

  • MySQL performance over a (local) network much slower than I would expect

    - by user15241
    MySQL queries in my production environment are taking much longer than I would expect them to. The site in question is a fairly large Drupal site with many modules installed. The webserver (nginx) and database server (MySQL) are hosted on separate machines, connected by a 100 Mbps LAN connection (hosted by Rackspace). I have the exact same site running on my laptop for development. Obviously, on my laptop, the webserver and database server are on the same box.

    Here are the results of my database query times:

        Production:
        Executed 291 queries in 320.33 milliseconds. (homepage)
        Executed 517 queries in 999.81 milliseconds. (content page)

        Development:
        Executed 316 queries in 46.28 milliseconds. (homepage)
        Executed 586 queries in 79.09 milliseconds. (content page)

    As can clearly be seen from these results, the time involved in querying the MySQL database is much shorter on my laptop, where the MySQL server is running on the same machine as the web server. Why is this?!

    One factor must be the network latency. On average, a round trip from the webserver to the database server takes 0.16 ms (shown by ping). That must be added to every single MySQL query. So, taking the content-page example above, where 517 queries are executed, network latency alone will add about 82 ms to the total query time. However, that doesn't account for the difference I am seeing (79 ms on my laptop vs 999 ms on the production boxes).

    What other factors should I be looking at? I had thought about upgrading the NIC to a gigabit connection, but clearly there is something else involved. I have run the MySQL performance tuning script from http://www.day32.com/MySQL/ and it tells me that my database server is configured well (better than my laptop, apparently). The only problem reported is "Of 4394 temp tables, 48% were created on disk". This is true in both environments, and in the production environment I have even tried increasing max_heap_table_size and tmp_table_size to 1GB, with no change (I think this is because I have some BLOB and TEXT columns).

    Read the article

  • Tips for maximizing Nginx requests/sec?

    - by linkedlinked
    I'm building an analytics package, and project requirements state that I need to support 1 billion hits per day. Yep, "billion". In other words, no less than 12,000 hits per second sustained, and preferably some room to burst. I know I'll need multiple servers for this, but I'm trying to get maximum performance out of each node before "throwing more hardware at it".

    Right now, I have the hit-tracking portion completed and well optimized. I pretty much just save the requests straight into Redis (for later processing with Hadoop). The application is Python/Django with gunicorn as the gateway. My 2 GB Ubuntu 10.04 Rackspace server (not a production machine) can serve about 1,200 static files per second (benchmarked using ApacheBench against a single static asset). For comparison, if I swap out the static-file link with my tracking link, I still get about 600 requests per second; I think this means my tracker is well optimized, because it's only a factor of 2 slower than serving static assets.

    However, when I benchmark with millions of hits, I notice a few things:

    - No disk usage. This is expected, because I've turned off all nginx logs, and my custom code doesn't do anything but save the request details into Redis.
    - Non-constant memory usage. Presumably due to Redis' memory management, my memory usage will gradually climb up and then drop back down, but it has never once been my bottleneck.
    - System load hovers around 2-4, the system is still responsive during even my heaviest benchmarks, and I can still manually view http://mysite.com/tracking/pixel with little visible delay while my (other) server performs 600 requests per second.
    - If I run a short test, say 50,000 hits (takes about 2m), I get a steady, reliable 600 requests per second. If I run a longer test (tried up to 3.5m so far), my r/s degrades to about 250.

    My questions:

    a. Does it look like I'm maxing out this server yet? Is 1,200/s static-file nginx performance comparable to what others have experienced?
    b. Are there common nginx tunings for such high-volume applications? I have worker threads set to 64, and gunicorn worker threads set to 8, but tweaking these values doesn't seem to help or harm me much.
    c. Are there any Linux-level settings that could be limiting my incoming connections?
    d. What could cause my performance to degrade to 250 r/s on long-running tests? Again, the memory is not maxing out during these tests, and HDD use is nil.

    Thanks in advance, all :)
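
    On (b) and (c), a sketch of the settings that usually come up for high connection rates (illustrative values only, to be benchmarked one change at a time; the sysctl side matters especially for long runs, where ephemeral-port and TIME_WAIT exhaustion can produce exactly this kind of gradual drop-off):

        # nginx.conf (sketch, illustrative values)
        worker_processes      4;            # roughly one per core
        worker_rlimit_nofile  100000;
        events {
            worker_connections 4096;
            multi_accept       on;
        }
        http {
            access_log        off;          # already off here
            keepalive_timeout 15;
        }

        # /etc/sysctl.conf (sketch)
        # net.core.somaxconn = 4096
        # net.ipv4.ip_local_port_range = 1024 65535
        # net.ipv4.tcp_fin_timeout = 15
        # net.ipv4.tcp_tw_reuse = 1
        # fs.file-max = 200000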

    Read the article
