Search Results

Search found 12439 results on 498 pages for 'wondering'.

Page 414/498 | < Previous Page | 410 411 412 413 414 415 416 417 418 419 420 421  | Next Page >

  • Strategy for incremental data source fetching in Excel

    - by user1352530
    I am in a scenario with a table that is refreshed by a third-party app every week. I need to keep accumulating all of its data in Excel, using an ODBC connection to the database. I am wondering: Approach 1: Is there a way to force Excel to append results on every update (the update would be triggered according to a parameter that indicates the week)? I tried to define the table that the connection loads using a dynamic reference, but once it is anchored the first time, the table position is never redefined. Approach 2: Use an ETL to accumulate all weekly results into a staging table and then connect Excel to it in real time. But I would need a mechanism for caching old data, as I cannot let the time Excel takes to open keep growing. Imagine that after 10 years, Excel would need to fetch 10 years of data at opening before showing anything. Is there a way to store the already-fetched data and increment it in real time (when the workbook is opened) by selecting only the new data (with a query/filter or something)? Thanks. EDIT: Maybe it's better to ask it this way: what is the optimal strategy for a table that keeps growing and needs to be read in real time by Excel? I just don't want to fetch absolutely all the data after a few months...
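
    A minimal sketch of the watermark idea behind Approach 2, assuming a hypothetical query_to_csv command standing in for whatever ODBC/ETL tool actually runs the extract; only weeks newer than the last one recorded get appended to the file Excel reads:

        #!/bin/sh
        # Hypothetical sketch: append only weeks newer than the recorded watermark.
        WATERMARK_FILE=last_week.txt        # stores the most recent week already loaded
        ACCUM_FILE=accumulated.csv          # the file Excel opens or links to

        last_week=$(cat "$WATERMARK_FILE" 2>/dev/null || echo 0)

        # query_to_csv is a placeholder for the real extract step (ODBC, bcp, an ETL job, ...)
        query_to_csv "SELECT * FROM source_table WHERE week > $last_week" >> "$ACCUM_FILE"

        # Record the new watermark so the next run only fetches newer weeks.
        query_to_csv "SELECT MAX(week) FROM source_table" > "$WATERMARK_FILE"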

    Read the article

  • Would a PHP application benefit from being served from a RAM drive?

    - by Tom Marthenal
    I am in charge of hosting a PHP application that is large and slow, but easy to scale. The application is entirely static, with no writable disk storage needed. We've profiled the application, and the main bottleneck appears to come from loading the application rather than from the work the application does. The application is not CPU-intensive, although it does use a fair amount of memory (think Magento). Currently we distribute it by having a series of servers with the same PHP files on their hard drives and a load balancer in front of them. Easy but expensive. I've been reading about RAM disks and the I/O benefits they offer, and was wondering if they would be well suited to PHP applications. Since PHP applications are loaded from disk on every request and often involve lots of different files (as opposed to being kept in memory like a Java application), I would figure that disk performance can be a severe bottleneck. Would placing the PHP files on a RAM disk and using the mount point as Apache's document root offer performance benefits? A startup script could create the RAM drive and then copy the files (which are plain text and small) from a permanent location to the temporary RAM drive. Does this make sense, or should I just trust the Linux kernel to cache the appropriate files in memory by itself?
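
    For reference, a minimal sketch of the startup script described above, assuming the application lives in /var/www/app and Apache's document root would be pointed at /mnt/ramdisk (paths and size are placeholders):

        #!/bin/sh
        # Create a RAM-backed filesystem and copy the read-only PHP files onto it.
        mkdir -p /mnt/ramdisk
        mount -t tmpfs -o size=512m tmpfs /mnt/ramdisk

        # Copy the application from its permanent location; -a preserves permissions.
        rsync -a /var/www/app/ /mnt/ramdisk/

        # Apache's DocumentRoot (or a vhost) would then point at /mnt/ramdisk.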

    Read the article

  • Onboard Ethernet suddenly stopped working to router

    - by AfterschoolHobbist
    Hey guys, yesterday I had a sudden problem with my Ethernet connection to my router. My computer is a Pentium 4 with Windows XP SP3 running on it, and it was working fine earlier in the day, but yesterday night was the start of the problem. My computer is unable to ping the router or any website, and unable to get an internet connection. Since it was connected to a hub, I connected it directly to the router instead, and the same problem occurred: unable to ping the router (or connect to it), and still no internet access. I then connected directly to the modem and the problem still persisted. For each connection, I tried both my desktop and my laptop, and my laptop was able to connect while my desktop was not. I wondered if it was an OS issue and ran a live Ubuntu CD to see if Ubuntu could connect to the internet, but the issue persisted and I was unable to get internet access. I then set my router's lease time to 1 hour and waited. After 1 hour, the lease for my computer was removed and I hoped this would fix it, but it didn't, and something strange is going on: my desktop is still unable to ping the router or connect to the internet, but for some reason the router and the desktop can still contact each other well enough for a lease with a local IP address to be handed out. The router records a lease for my desktop, and when I do ipconfig, my desktop also shows that it has been given a local IP address. I have concluded that this is a hardware issue and the only fix is to buy a new network adapter, but I am wondering if anyone has an explanation for why this happened, why my MAC address is 01-23-45-67-89-ab, and whether there is any way to fix it without buying a new network card? Thanks in advance.

    Read the article

  • RAM configuration: 4x4 or 2x8?

    - by Carl B
    I am looking to upgrade my RAM to 16 GB and I am wondering if there is any distinct advantage to the way I do it, that being a 4x4 or a 2x8 setup. In all my searching there have been a number of pros for each profile, but I can find no benchmark results comparing the two setups. So, if there are two kits of the same speed, same voltage, same timings and same CAS, which would perform better or have the better overall benefit? A few examples from my search: a 4x4 set would have the benefit that if one stick failed, you only lose 25% of your RAM vs. 50% in a 2x8 setup; a 2x8 setup would put less strain on the memory controller and motherboard; a 2x8 setup would generate less heat; a 2x8 setup is easier to overclock (not part of my need, but a lot of the comparisons circled around how easily the two-stick setup overclocks). There is one outstanding benefit that I have found, at least at the retailer I have looked at, and that is price: the 2x8 kit is nearly half the cost. My motherboard supports a max of 16 GB and I have a 64-bit OS. Has anyone seen any performance comparisons, or is 16 GB just 16 GB no matter how you slice it? And is there any merit to the above pros? Edit: as per the mobo specs - Main Memory: supports four unbuffered DIMMs of 1.5 V DDR3 800/1066/1333/1600*/1800*/2133* (OC) DRAM, 16 GB max.

    Read the article

  • Hard Disk based storage library

    - by Ryan M.
    We have a Tandberg T24 tape device to handle all of our long-term backups right now. We decided that we're not backing up nearly everything that we would like to, and that we still have a lot of vulnerabilities. To get to where we want to be, we're going to have to back up a lot more servers than we're currently doing. All of our internal servers have some sort of directly attached drive (e.g. a LaCie RAID box or a simple portable hard drive) doing backups, but what we want to do is get those backups off-site. The current tape drive is directly attached via SCSI to a Windows Server 2008 file server, so to back up anything to tape, it has to be funneled through the file server. With the increase that we have planned, I don't think that funneling everything through the file server is the right course of action, and I'm thinking that maybe a second backup device would be more appropriate. I would like your input on a couple of ideas. 1) Doing HDD instead of tape. Tape is hard to deal with. We have a regular rotation cycle, so the media don't need years and years of shelf life, so I'm wondering if something HDD-based would be better. 2) Something accessible over the network. Instead of having the device directly attached to one specific machine, have it available to all the servers over the network. Our file server is a 12-disk RAID 6 setup; I was thinking something like that, but with no RAID involved, where all disks are standalone so they can be used/installed/removed on an individual basis. Does any such thing exist? Thanks for your ideas. I'm really interested to hear about some of the solutions you guys are using.

    Read the article

  • Is this a legitimate registry key? (Windows 7)

    - by Keyes
    In HKEY_LOCAL_MACHINE\Software\Classes I found some registry keys named MSIME.Taiwan, MSIME.Japan and a couple of others with similar names, except with a number at the end, so there were 4 keys altogether. From what I know, it could be associated with the Windows feature that lets you write Japanese characters. I also found a McAfee page, which seemed dated, but it said the key is created by a virus named W32/Virut. Just wondering: is this a legitimate key? I found it on another PC as well, and both PCs, when the keys are exported to a .txt file, show they were last written to in 2009. Here is the reg query for the 4 keys:

        HKEY_LOCAL_MACHINE\Software\Classes\MSIME.Japan             (Default)  REG_SZ  Microsoft IME (Japanese)
        HKEY_LOCAL_MACHINE\Software\Classes\MSIME.Japan\CLSID       (Default)  REG_SZ  {6A91029E-AA49-471B-AEE7-7D332785660D}
        HKEY_LOCAL_MACHINE\Software\Classes\MSIME.Japan\CurVer      (Default)  REG_SZ  MSIME.Japan.11
        HKEY_LOCAL_MACHINE\Software\Classes\MSIME.Japan.11          (Default)  REG_SZ  Microsoft IME (Japanese)
        HKEY_LOCAL_MACHINE\Software\Classes\MSIME.Japan.11\CLSID    (Default)  REG_SZ  {6A91029E-AA49-471B-AEE7-7D332785660D}
        HKEY_LOCAL_MACHINE\Software\Classes\MSIME.Taiwan\CLSID      (Default)  REG_SZ  {F407D01A-0BCB-4591-9BD6-EA4A71DF0799}
        HKEY_LOCAL_MACHINE\Software\Classes\MSIME.Taiwan\CurVer     (Default)  REG_SZ  MSIME.Taiwan.8
        HKEY_LOCAL_MACHINE\Software\Classes\MSIME.Taiwan.8          (Default)  REG_SZ  IMTCCORE
        HKEY_LOCAL_MACHINE\Software\Classes\MSIME.Taiwan.8\CLSID    (Default)  REG_SZ  {F407D01A-0BCB-4591-9BD6-EA4A71DF0799}

    Read the article

  • rsync per-site configuration file?

    - by Scott
    I know how to configure a per-site entry for ssh, but is there any kind of client configuration for rsync that allows per-site configuration options and aliases, or similar shortcuts to .ssh/config? I'm curious because I have a minimal SSH server installed on my Android phone and I also have a minimal rsync tool on it. I'm getting tired of having to log in to the phone as root and symlink both tools to the standard places the Android OS looks for executables, as the SSH server is bare-bones and has a typical multi-call binary for the basic Unix commands (one that does not include rsync). I end up having to include --rsync-path=/path/to/rsync/android/files/rsync every time I want to do any rsyncing of the files on my phone, but this path is always the same. I've gotten around it in the meantime with a glob approach in a shell script wrapper, but this sometimes limits the customization I can do with the rsync call. I'm just wondering if there is anything similar to the .ssh/config file where I can create an alias for my phone (e.g. 'android') so that specifying rsync android:/mnt/sdcard will automatically assume --rsync-path=/blah/blah/blah --no-g --no-p --no-t etc.
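
    rsync itself has no per-host config file, so one common workaround is a small wrapper script that injects host-specific options (the 'android' host alias itself can still live in .ssh/config as usual). A sketch, with the script name and the --rsync-path value being placeholders:

        #!/bin/sh
        # Hypothetical ~/bin/rsync-to: prepend per-host rsync options, then run rsync.
        # Usage: rsync-to android:/mnt/sdcard/DCIM/ ./photos/
        case "$*" in
          *android:*)
            # Options only needed when the android host is involved.
            set -- --rsync-path=/path/to/rsync/android/files/rsync --no-g --no-p --no-t "$@"
            ;;
        esac
        exec rsync "$@"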

    Read the article

  • Troubleshooting really slow login on a (Linux) machine

    - by Peeter Joot
    Within the last couple of weeks, any attempt to log in to a specific Linux server has gotten really slow. Once I've logged in, things appear to run without significant delay, but some other login-like activities (like starting a new screen session) are slow. The machine has been rebooted a couple of times recently and that hasn't helped, and it doesn't appear to be the $PATH search (where $PATH can sometimes include bad NFS mounts), which I've seen cause this historically in our environment. I've also tried completely removing my .profile/.bash*/... type of init files to rule out anything bad there. I also see slow logins for at least one other userid on the system. One thing I've noticed is the following message when trying to exit from a screen terminal: "Utmp slot not found -> not removed", and I am wondering if this is related (having a vague recollection that utmp has something to do with login). Any idea what that message means, or how to fix it, and whether it would be related? Failing that, what sort of problem-determination tools are available to investigate what is slowing down this login process?
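
    A few generic timing checks that can help narrow down where the delay sits (hostname and user are placeholders); slow name-service lookups and a verbose SSH trace are the usual first suspects:

        # Time the whole login round trip and watch where ssh stalls in the verbose output.
        time ssh -vvv user@slowhost /bin/true

        # On the server itself: are NSS lookups (passwd/group/hosts) slow, e.g. via LDAP or bad DNS?
        time getent passwd "$USER"
        time getent hosts "$(hostname)"

        # Inspect the utmp records that the screen message complains about.
        utmpdump /var/run/utmp | tail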

    Read the article

  • Can I clone my hard drive to an external and boot from the clone?

    - by willbuntu
    First thing: I am not asking what software I'm supposed to use. I already know the answer: Ghost (proprietary), Clonezilla, and dd (if I'm careful). What I really want to know is if it is possible to (essentially) bit-for-bit clone my entire installation (OS, installed software, activation(s), etc.) to an external USB hard-drive, and then boot off of that (if I need to, I know how to edit BIOS settings and use Plop boot manager), and work with it day-to-day as if there was virtually no difference from using my internal HDD now. Again, I'm not asking how to install Windows to an external (because I know I'd need to do some special workaround), I'm asking if I can clone everything and boot off of it. In case you're wondering why I'm going to this trouble: I'm using a Lenovo Essentials laptop that has an unmodifiable partition table (due to recovery crap), and has all 4 of its partitions spoken for (3 primary, one extended, cannot change the extended). Anyway, my thought is that if I can clone everything and boot off of it when I need to, and just have a Linux distro on the internal HDD, then that could work.
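
    For what it's worth, a raw-copy sketch of the kind of clone being described, assuming the internal disk is /dev/sda and the external USB disk shows up as /dev/sdb (the device names are assumptions; getting them wrong is destructive, so double-check with lsblk first):

        # Identify the source and target disks before anything else.
        lsblk

        # Bit-for-bit copy of the whole disk, partition table included.
        sudo dd if=/dev/sda of=/dev/sdb bs=4M conv=sync,noerror status=progress
        sync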

    Read the article

  • Hostname vs webpage domain.

    - by Mark
    Hi all, I'm just starting to look at deploying a web page and getting into the joys of DNS etc., and I'm wondering how you set up multiple web servers, each with its own hostname/public IP address, and yet have them serve up a website from one domain. For example, let's say you have a website example.com, and an A record in DNS that points at its IP address of 1.2.3.4. You want to have two servers, prod1 and prod2, with some kind of load balancer in front of them for failover reasons. The way I see it, you would want the hostnames of these servers to be prod1.example.com and prod2.example.com, and perhaps loadb.example.com for the load balancer. How would you set up the DNS so this would all work, i.e. you could ssh to any of the server names (prod1.example.com, prod2.example.com or loadb.example.com) and also just use the www.example.com URL to go to the website? And would all these server names be resolvable from the public internet, and is that safe? This would be a Linux environment, for argument's sake Ubuntu, with a dynamic Django-framework website running on Apache 2.2. Cheers, Mark
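
    A quick way to reason about it is that each name is simply its own record in the example.com zone; checks like the following would confirm the intended layout (all names and addresses here are the question's own examples, not real infrastructure):

        # Host records for the individual machines (the ssh targets).
        dig +short prod1.example.com A     # prod1's own public IP
        dig +short prod2.example.com A     # prod2's own public IP
        dig +short loadb.example.com A     # the load balancer's public IP, e.g. 1.2.3.4

        # The site name points at the load balancer, not at an individual web server.
        dig +short www.example.com         # typically a CNAME to loadb.example.com (or the same A record)
        dig +short example.com A           # the zone apex cannot be a CNAME, so usually the balancer's A record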

    Read the article

  • How would I change the DocumentRoot on the version of Apache that came pre-installed on my Mac OS X system?

    - by racl101
    OK, so I want to take advantage of the Apache server that comes installed on my Mac OS X system (which means I would rather not install my own version of Apache, since I might as well try to use what comes bundled), and as such I went to change some settings in the configuration file /etc/apache2/httpd.conf. Namely, I changed these two lines: DocumentRoot "/Users/myusername/Sites" and <Directory "/Users/myusername/Sites"> so that they pointed to a folder inside my Dropbox folder (so I could have my docs sync to Dropbox): DocumentRoot "/Users/myusername/Dropbox/public_html" and <Directory "/Users/myusername/Dropbox/public_html">. That didn't work. So then I figured OK, maybe it was too much to ask to make a folder in my Dropbox my document root. So then I thought: what if I make the document root another folder of my choosing, like so: DocumentRoot "/Users/myusername/dev-sites/public_html" and <Directory "/Users/myusername/dev-sites/public_html">? That didn't work either. After looking within the httpd.conf file for clues, it seems that only two directories appear to work as DocumentRoot paths for the Apache that comes bundled with Mac OS X: /Users/myusername/Sites (or ~/Sites) and /Library/WebServer/Documents/. Trying to use any other directories didn't seem to work; I would get 403 errors in my browser. I was wondering if there are some other settings to change in httpd.conf, or permissions to set, to make this work. Any help would be appreciated, and many thanks in advance.
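
    A 403 on a home-directory docroot is very often a filesystem-permission issue rather than an httpd.conf one: the Apache user needs execute (search) permission on every directory in the path, plus read on the content. A sketch of the usual fixes, with the Dropbox path standing in for whichever directory is chosen:

        # Apache's user must be able to traverse every parent directory of the docroot.
        chmod o+x /Users/myusername /Users/myusername/Dropbox /Users/myusername/Dropbox/public_html

        # ...and read the content itself.
        chmod -R o+r /Users/myusername/Dropbox/public_html

        # The matching <Directory> block also needs to allow access; then restart Apache.
        sudo apachectl restart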

    Read the article

  • How to correctly set GNU Screen to display currently running program in hardstatus

    - by johnny_bgoode
    In bash, displaying the name of the current program in the GNU Screen hardstatus line takes only two configuration lines. First, tell screen what the end of your prompt normally looks like, and supply a default title for a window when you are sitting at the shell: shelltitle "$ |bash". Next, place this escape sequence in the PS1 variable, before the characters that normally terminate the prompt ('$ ' in this case): \033k\033\\ This technique works, to a point. The hardstatus window title is updated to the name of the currently running program, and then switches back to the default title shortly after execution is finished. One major problem, however, is that this escape string is not itself escaped (wrapped in \[ \]), causing line-wrapping problems with commands longer than the initial line. This was annoying, so I set out looking for a solution. It turns out that simply escaping the previous escape sequence corrects the line wrapping: \[\033k\]\[\033\\\] Great! My hardstatus window title still updates to the name of the currently running program, and now my longer commands wrap to the second line correctly. However, with this new escape sequence in my PS1, screen updates the window title to the actual command I am typing, not simply the name of the current program once it is executed. I am wondering, has anyone gotten this working correctly - i.e. both line wrapping and proper updating of the hardstatus window title?
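
    For comparison, this is the commonly cited dynamic-titles recipe from the screen documentation, written with bash's \e shorthand and with the \[ \] wrapping kept for correct line wrapping; treat it as a starting point rather than a guaranteed fix for the typing behaviour described above:

        # In ~/.screenrc -- "$ " is the prompt-end marker and "bash" the fallback title:
        #   shelltitle '$ |bash'

        # In ~/.bashrc -- the null title escape (ESC k ESC \) goes at the front of PS1,
        # wrapped in \[ \] so bash does not count it toward the prompt length:
        PS1='\[\ek\e\\\]\u@\h:\w\$ '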

    Read the article

  • Laptop + External Monitor: Possible to Scale Mouse-crossover Position?

    - by Brian Lacy
    I have a laptop with a 16" screen @ 1366x768 resolution, and a 24" external monitor at 1920x1200 resolution. I'm extending the Windows 7 desktop onto the other screen. The 24" is the secondary monitor, and is positioned so the bottom of that screen lines up with the bottom of the 16" screen. Currently, if I move the mouse to the top portion of the 24" screen, then move left, the mouse will stop at the edge of the 24" screen until I move it down far enough to get into the range of the 16" screen. So that's all pretty standard; but I'm wondering if there's any software out there that will allow me to "scale" the mouse-crossover position, such that if I'm at the TOP of the 24" screen and move left, the mouse will cross to the TOP of the 16" screen, whereas if I'm at the BOTTOM of the 24" screen, crossing over will position the mouse at the BOTTOM of the 16" screen. So then, if the mouse cursor is on the 24" screen at (x,y) position (10,600), and I move 20 pixels left, the mouse is now on the 16" screen at position (1356,384). Anyone know of such a solution?? Note: I also use Ubuntu, so if there's a solution to this for X, but not Windows, I'd be interested in that also.

    Read the article

  • How to correctly set GNU Screen to display currently running program in hardstatus

    - by johnny_bgoode
    I posted this question on SuperUser but it's hardly getting any views, so I thought I'd ask here as well. In bash, displaying the name of the current program in the GNU Screen hardstatus line takes only two configuration lines. First, tell screen what the end of your prompt normally looks like, and supply a default title for a window when you are sitting at the shell: shelltitle "$ |bash". Next, place this escape sequence in the PS1 variable, before the characters that normally terminate the prompt ('$ ' in this case): \033k\033\\ This technique works, to a point. The hardstatus window title is updated to the name of the currently running program, and then switches back to the default title shortly after execution is finished. One major problem, however, is that this escape string is not itself escaped (wrapped in \[ \]), causing line-wrapping problems with commands longer than the initial line. This was annoying, so I set out looking for a solution. It turns out that simply escaping the previous escape sequence corrects the line wrapping: \[\033k\]\[\033\\\] Great! My hardstatus window title still updates to the name of the currently running program, and now my longer commands wrap to the second line correctly. However, with this new escape sequence in my PS1, screen updates the window title to the actual command I am typing, not simply the name of the current program once it is executed. I am wondering, has anyone gotten this working correctly - i.e. both line wrapping and proper updating of the hardstatus window title? Thanks!

    Read the article

  • Can I regenerate the RSA key for SSH access to a Cisco router? Or should I completely erase the SSH config?

    - by Josh
    I have a production 2691 that I administer via telnet. I'd like to change that to SSH. Looking at the config, it looks like there have been keys generated in the past. I think the history here is that SSH was set up, they had issues connecting, and fell back to telnet. There are a number of crypto entries, including the following:

        crypto pki trustpoint Gateway-2691.xxx.com
         enrollment selfsigned
         subject-name cn=IOS-Gateway-2691.xxx.com
         revocation-check none
         rsakeypair Gateway-2691.xxx.com

    I've also got this going on:

        Gateway-2691#sh ip ssh
        SSH Disabled - version 1.99
        %Please create RSA keys (of atleast 768 bits size) to enable SSH v2.
        Authentication timeout: 120 secs; Authentication retries: 3
        Gateway-2691#

    My question is simply: can I run crypto key generate rsa again to set it up again? Is there a way to negate or "no" all of the previous SSH config so that I can start fresh there? I may be asking the wrong questions, as I'm learning here. As for the SSH how-to, I'm sure I can find information in many places. I'm just basically wondering if I need to start fresh, or if I can pick up where the last attempt at SSH config left off.
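
    For reference, a hedged sketch of the usual IOS sequence, with the domain taken from the trustpoint name above and treated as a placeholder. Generating a new key pair replaces the old one, and zeroize wipes existing keys first if a clean slate is wanted; since zeroizing can also affect the self-signed trustpoint, it is safer to do this from the console rather than over the telnet session being replaced:

        conf t
         ! Wipe existing key pairs only if you really want a clean slate:
         crypto key zeroize rsa
         ! The key name is derived from hostname + domain, so both must be set:
         ip domain-name xxx.com
         crypto key generate rsa modulus 2048
         ip ssh version 2
         ! Keep telnet enabled on the VTYs until SSH is confirmed working:
         line vty 0 4
          transport input telnet ssh
         end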

    Read the article

  • What should I know about memory management?

    - by bua
    First of all: I don't use stackadmin or similar, so please don't vote for moving there. I'm reading man top and the paper "What Every Programmer Should Know About Memory", but I need a really simple, layman's explanation ;) Given the following top dump:

        top - 11:21:19 up 37 days, 21:16,  4 users,  load average: 0.41, 0.75, 1.09
        Tasks: 313 total,   5 running, 308 sleeping,   0 stopped,   0 zombie
        Cpu(s):  0.4%us,  0.6%sy,  0.9%ni, 96.2%id,  0.1%wa,  0.0%hi,  1.9%si,  0.0%st
        Mem:  132103848k total, 131916948k used,   186900k free,    54000k buffers
        Swap:  73400944k total,  73070884k used,   330060k free, 13931192k cached

          PID USER  PR  NI  VIRT   RES  SHR S %CPU %MEM     TIME+ COMMAND
         3305 tudb  25  10  144m   52m  940 R  6.0  0.0   1306:09 app
         3011 tudb  15   0 71528   19m  604 S  3.3  0.0 171:57.83 app
         3373 tudb  25  10  209m   93m  940 S  3.0  0.1   1074:53 app
         3338 tudb  25  10  144m   47m  940 R  2.7  0.0 780:48.48 app
         4227 tudb  25  10  208m   99m  904 S  1.3  0.1 198:56.01 app
         8506 tudb  25  10 80.7g   49g  932 S  2.0 39.6 458:31.22 app

    I'm wondering what these are: RES (my explanation: physical memory consumption? see the 49g), VIRT (memory-mapped disk used as cache? see the 80.7g), SHR (shared pages?), and the Swap line's "cached" label (is that for memory-mapped disk in the swap cache?). Should the sum of RES add up to "Mem: ... used"? Or maybe the sum of VIRT?
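
    A small sketch of where top gets these numbers, using the current shell as the example process: RES corresponds to VmRSS (resident physical pages), VIRT to VmSize (all mapped address space, including files and never-touched allocations), and SHR to the shared, largely file-backed part of the resident set:

        # Per-process view: the same numbers top reports, straight from /proc.
        grep -E 'VmSize|VmRSS' /proc/$$/status   # VmSize ~ VIRT, VmRSS ~ RES

        # Breakdown of the address space (file-backed vs anonymous mappings).
        pmap -x $$ | tail -n 3

        # System-wide counters that the Mem:/Swap: summary lines are built from.
        grep -E 'MemTotal|MemFree|Buffers|Cached|SwapCached' /proc/meminfo

    Summing RES across processes overshoots "Mem: used" because shared pages are counted once per process, and summing VIRT overshoots even more because it includes address space that was never touched at all.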

    Read the article

  • Centralized backup method recommendation for SMEs with various OSes

    - by Akinator
    Hi, I was wondering what, in your opinion, is the "best" method for having "everything" backed up in the following situation. We are an SME with 10 computers in total. Three of those computers are Macs; the rest are Windows (1 Vista, 4 Win7 and 2 XPs). I'm very open to what the method should be, but you should also consider the following: very limited resources, and quite "small" bandwidth (4 MBs for all of us on download, 0.4 MBs on upload; yep, that's it, though this might get a little bit better). One of the main things to back up would be the mail. Considerations: all Windows computers use Outlook, mainly 2003, and there is one Mac that uses Outlook too (for Mac, of course; not 2011 yet). We also have to back up the files: not a huge amount, very few very big files, very organized (by machine). What I would like is to hear your opinions as to which would be the best method (or combination of methods, preferably one of course) considering the above. We are not sure what we need and I'm open to suggestions, though an online (cloud-based) application would be great; remember that the bandwidth is unbearable. Last thing to consider: we would like to do weekly updates (unless the method is very easy, of course). Thanks in advance!! I tried to be as specific as possible, but if anything is needed I'll gladly update; please ask for any clarification needed! Please avoid answers like "upgrade all to Windows 7 and throw away your Macs" :) Ours may not be an ideal situation, but it is what it is, and right now it would be impossible for us to change it due to a lot of circumstances.

    Read the article

  • How do people type different languages into a computer?

    - by pecker
    Hello, we have English keyboards, and I have never seen any other keyboard in my life. I've been wondering for a long time: how do people type in Korea, China, Russia, Muslim countries and some European countries where English is less known? Do they have keyboards in their native language? I mean, are the keyboards manufactured directly in their native language, or do they use some kind of keyboard-mapping software to achieve the task? I've been searching Google Images to get a glance at their computers but didn't find any real keyboards for computers/smartphones. If they have a non-English keyboard, then how do they type web URLs? Are URLs possible in other languages too? If they have to type English URLs, then it also means that they need to know English. I've seen in some movies that all their software, and Windows itself, has text in their native language. How do they get a different language? I feel lost and confused. If you have any screenshots or pics of such a non-English computer, please post them. I want to see one.

    Read the article

  • rsync --remove-source-files but only those that match a pattern

    - by Daniel
    Is this possible with rsync? Transfer everything from src:path/to/dir to dest:/path/to/other/dir and delete some of the source files in src:path/to/dir that match a pattern (or size limit) but keep all other files. I couldn't find a way to limit --remove-source-files with a regexp or size limit. Update1 (clarification): I'd like all files in src:path/to/dir to be copied to dest:/path/to/other/dir. Once this is done, I'd like to have some files (those that match a regexp or size limit) in src:path/to/dir deleted but don't want to have anything deleted in dest:/path/to/other/dir. Update2 (more clarification): Unfortunately, I can't simply rsync everything and then manually delete the files matching my regexp from src:. The files to be deleted are continuously created. So let's say there are N files of the type I'd like to delete after the transfer in src: when rsync starts. By the time rsync finishes there will be N+M such files there. If I now delete them manually, I'll lose the M files that were created while rsync was running. Hence I'd like to have a solution that guarantees that the only files deleted from src: are those known to be successfully copied over to dest:. I could fetch a file list from dest: after the rsync is complete, and compare that list of files with what I have in src:, and then do the removal manually. But I was wondering if rsync can do this by itself.
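
    A hedged sketch of one way to get that guarantee using filter rules (the *.log pattern is a stand-in for the real pattern). The first pass transfers and removes only the matching files; since --remove-source-files only deletes files that the run actually transferred, anything created mid-run is left alone. The second, unfiltered pass then copies everything else without deleting:

        # Pass 1: copy only files matching the pattern, deleting each source file
        # only after it has been successfully transferred.
        rsync -a --remove-source-files \
          --include='*/' --include='*.log' --exclude='*' \
          src:path/to/dir/ dest:/path/to/other/dir/

        # Pass 2: copy everything else; nothing is removed from the source here.
        rsync -a src:path/to/dir/ dest:/path/to/other/dir/

        # A size threshold instead of a pattern would use --min-size (e.g. --min-size=100M) in pass 1.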

    Read the article

  • Why is piping dd through gzip so much faster than a direct copy?

    - by Foo Bar
    I wanted to back up a path from a computer in my network to another computer in the same network over a 100 Mbit/s line. For this I did dd if=/local/path of=/remote/path/in/local/network/backup.img which gave me a very low network transfer speed of about 50 to 100 kB/s, which would have taken forever. So I stopped it and decided to try gzipping it on the fly to make the amount to transfer much smaller. So I did dd if=/local/folder | gzip > /remote/path/in/local/network/backup.img.gz But now I get something like 1 MB/s network transfer speed, so a factor of 10 to 20 faster. After noticing this, I tested it on several paths and files and it was always the same. Why does piping dd through gzip increase the transfer rate by a large factor instead of only reducing the byte length of the stream by a large factor? I'd have expected even a small decrease in transfer rate instead, due to the higher CPU consumption while compressing, but now I get a double plus. Not that I'm not happy, but I'm just wondering. ;)
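
    One variable worth ruling out before crediting compression alone: without bs=, dd issues 512-byte reads and writes, and when of= sits on a network mount each tiny write can become its own round trip, whereas the gzip in the pipeline reads and writes in much larger buffered chunks. A comparison sketch, using the question's own paths:

        # Same copy, but with a large block size instead of dd's 512-byte default.
        dd if=/local/path of=/remote/path/in/local/network/backup.img bs=4M

        # The piped variant for comparison; gzip both shrinks the stream and batches the writes.
        dd if=/local/path bs=4M | gzip > /remote/path/in/local/network/backup.img.gz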

    Read the article

  • New AD user request form and workflow

    - by user66390
    I'm wondering if anyone is providing a solid solution for creating New Network User Account Request forms, and attaching workflows to them to automate account creation? I'm currently investigating a number of options, but am surprised that such a ubiquitous task hasn't been solved a dozen times over and thoroughly documented. Or at least isn't integrated into current off-the-shelf change management and ticketing systems. Ideally, I'd like for our current ticketing system, ServiceDesk+ to present a standard 'New User' form to department heads, which they can fill in with the required new user details. This triggers a workflow that submits the request as a ticket that can be reviewed and actioned. Actioning the ticket triggers a workflow that creates a user in AD with the details provided, and notifies the department head upon completion. All told, a pretty standard requirement that I'm sure most organizations have. What are other people doing to accomplish this? Edit: I should add, I'm more looking for "supported" methods. As is, I've submitted a number of scripted solutions, none of which have met with manager approval.

    Read the article

  • MS Excel 2010 in Windows XP - when I open a workbook the data is formatted differently than when I saved it

    - by Justin
    I haven't been able to find an answer to this. I have multiple files that I use regularly in Excel that now have cell formats of "Date". Every single cell in the entire workbook (all sheets) is now formatted as "Date". The problem is that I lost my formatting for percents, numbers, years, etc., and now everything is converted to a date (xx/xx/xxxx). I am able to open previously saved versions of a file (prior to my having the problem) and the cells are formatted as I intend them to be (percents, numbers, general, as well as dates). Since this has happened on a couple of different files recently, I am wondering how this is happening and how I can prevent it from happening in the future. I cannot cure the problem just by highlighting the entire sheet and converting back to General, because I lose all my percent and number formatting. Example (correct formatting):

        Month  Year  Working Days  MTD POS  Curr Rem
        May    2012  22            0        1,553,549
        June   2012  22            0        1,516,903
        June   2011  22            0        1,555,512
        June   2010  22            0        1,584,704

    Example (incorrect formatting):

        Month  Year                    Working Days                 MTD POS                      Curr Rem
        June   Tuesday, July 04, 1905  Wednesday, January 04, 1900  Wednesday, January 18, 1900  213,320
        July   Tuesday, July 04, 1905  Wednesday, January 04, 1900  Monday, January 16, 1900     314,261
        July   Monday, July 03, 1905   Wednesday, January 04, 1900  Sunday, January 15, 1900     447,759
        July   Sunday, July 02, 1905   Wednesday, January 04, 1900  Monday, January 16, 1900     321,952

    Sorry for the mess. Any suggestions?

    Read the article

  • How would I write a terminal command to download a folder with wget from a Media Temple (gs) server?

    - by racl101
    I'm trying to download a folder using wget in the Terminal (I'm using a Mac, if that matters) because my FTP client sucks and keeps timing out; it doesn't stay connected for long. So I was wondering if I could use wget to connect via the FTP protocol to the server to download the directory in question. I have searched around the internet for this and have attempted to write the command, but it keeps failing. So, assuming the following: the FTP username is [email protected], the FTP host is ftp.s12345.gridserver.com, and the FTP password is somepassword, I have tried to write the command in the following ways:

        wget -r ftp://[email protected]:[email protected]/path/to/desired/folder/
        wget -r ftp://serveradmin:[email protected]/path/to/desired/folder/

    When I try the first way I get this error:

        Bad port number.

    When I try the second way I get a little further, but then this:

        Resolving s12345.gridserver.com... 71.46.226.79
        Connecting to s12345.gridserver.com|71.46.226.79|:21... connected.
        Logging in as serveradmin ... Login incorrect.

    What could I be doing wrong?
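
    If the FTP username itself contains an @ (as Media Temple Grid Service logins typically do), putting it inside the URL confuses the user:password@host parsing. A sketch of the usual workaround is to pass the credentials as options instead (the username and password below are placeholders):

        wget -r \
          --ftp-user='your-ftp-user@s12345.gridserver.com' \
          --ftp-password='somepassword' \
          'ftp://ftp.s12345.gridserver.com/path/to/desired/folder/'

        # Alternatively, percent-encode the @ in the username as %40 and keep the URL form.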

    Read the article

  • I need to build a mini computer lab with 6 PCs

    - by chiurox
    Hello everyone, this weekend I need to build a mini computer lab with 5-6 PCs. The purpose is a small computer class that will be taking place. They're mostly going to be doing office work and everyday kinds of tasks, so nothing high-performance; however, I hope it will last for at least 2 more years. I already have some parts for 2 PCs: one is a P4 3.0 and another is a Celeron 2.4 GHz, both socket 478. For the other 4 PCs, I'm wondering if I should buy individual parts and put them together myself or get workstations like those Dell OptiPlexes with P4s. To put things into perspective, I am not currently in the US; I'm in South America and prices here are ridiculous. Another really important thing is that I need to share an internet connection between a total of 8 PCs at this place. Right now I only have a crappy wireless router; should I get another one, or go with a switch? I'm not experienced in this matter. Thanks guys!

    Read the article

  • Puppet: variable overriding best practices

    - by rvs
    I'm wondering what the best practices are for overriding variables in Puppet. I'd like to have all my nodes (which are located in different places; some of them are QA, some live) use the same classes. Right now I have something like this:

        class base_linux {
          <...> # something which requires $env and $relayhost variables
        }

        class linux_live {
          $relayhost = "1.1.1.1"
          $env = "prod"
          include base_linux
        }

        class linux_qa {
          $relayhost = "2.2.2.2"   # override relayhost
          include base_linux
        }

        class linux_trunk {
          $env = "trunk"           # override env
          include linux_qa
        }

        node "trunk01" {
          include linux_trunk
          include <something else>
        }

        node "trunk02" {
          $relayhost = "3.3.3.3"   # override relayhost
          include linux_trunk
          include <something else>
        }

        node "prod01" {
          include linux_prod
        }

    So, I'd like to have some defaults in base_linux that can be overridden by linux_qa/linux_live, which can in turn be overridden in higher-level classes and in node definitions. Of course, this does not work (and is not expected to work), and it does not work with class inheritance either. I could probably achieve it using global-scope variables, but that does not seem like a good idea to me. What is the best way to solve this problem?
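
    One commonly suggested direction (not necessarily the poster's eventual solution) is to turn the variables into class parameters with defaults, so each wrapper class or node overrides only what it needs; a sketch under that assumption:

        class base_linux ($relayhost = '1.1.1.1', $env = 'prod') {
          # resources that need $relayhost and $env go here
          notify { "base_linux for ${env} via ${relayhost}": }
        }

        class linux_qa {
          class { 'base_linux': relayhost => '2.2.2.2' }   # env keeps its default
        }

        node 'trunk02' {
          class { 'base_linux':
            relayhost => '3.3.3.3',
            env       => 'trunk',
          }
        }

    In later Puppet versions the same layering is usually pushed into Hiera data rather than wrapper classes, which avoids declaring a parameterized class from more than one place.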

    Read the article
