Search Results

Search found 56300 results on 2252 pages for 'local working'.


  • How to completely shut down an ATI card

    - by Celso
    I would like to know how to prevent my ATI card from turning on when I boot into Ubuntu 11.10. My BIOS only lets me shut down the Intel HD card or leave both on, but I want to know if it is possible to shut the ATI card down completely without going into the BIOS (and if it is possible without using vga_switcheroo, even better!). My system is: Acer 3820TG -- Intel Core i3 350M, 2.26 GHz, ATI Mobility Radeon HD 5470 with up to 2138 MB HyperMemory, 13.3" HD LED LCD, 4 GB DDR3, Corsair 60 GB SATA 2 SSD. EDIT: I now know what was missing from the answers! I edited the /etc/rc.local file and added the following lines:
        sleep 3
        echo ON > /sys/kernel/debug/vgaswitcheroo/switch
        echo IGD > /sys/kernel/debug/vgaswitcheroo/switch
        echo OFF > /sys/kernel/debug/vgaswitcheroo/switch
    Then save the file and restart. It should now be possible to use only the Intel card. By the way, I didn't blacklist the radeon driver, because doing so made my ATI card wake up. (Use this at your own risk; I only tested it on my system.)
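    To confirm the switch took effect, the same vgaswitcheroo interface can be read back; a quick check, assuming debugfs is mounted as it is on stock Ubuntu:
        sudo cat /sys/kernel/debug/vgaswitcheroo/switch
        # the active card is marked with '+'; the IGD (Intel) line should show
        # Pwr, and the DIS (discrete ATI) line should show Off or DynOff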

    Read the article

  • Two new features in November 2009 CTP

    - by kaleidoscope
    Windows Azure Diagnostics Managed Library: The new Diagnostics API enables logging using standard .NET APIs. The Diagnostics API provides built-in support for collecting standard logs and diagnostic information, including the Windows Azure logs, IIS 7.0 logs, Failed Request logs, crash dumps, Windows Event logs, performance counters, and custom logs. Variable-size Virtual Machines (VMs): Developers may now specify the size of the virtual machine to which they wish to deploy a role instance, based on the role's resource requirements. The size of the VM determines the number of CPU cores, the memory capacity, and the local file system size allocated to a running instance. e.g.: <WebRole name="WebRole1" vmsize="ExtraLarge"> Supported values for 'vmsize' are: Small, Medium, Large, and ExtraLarge. More information for the Diagnostics Managed Library can be found at: http://msdn.microsoft.com/en-us/library/ee758705.aspx   Girish, A
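    For context, the attribute sits on the role element in the service definition file. A minimal sketch of where it goes (the service name and role contents are placeholders; check the linked MSDN page for the exact schema):
        <ServiceDefinition name="MyService"
            xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
          <WebRole name="WebRole1" vmsize="ExtraLarge">
            <!-- endpoints, configuration settings, etc. -->
          </WebRole>
        </ServiceDefinition>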

    Read the article

  • Upcoming Events

    - by MOSSLover
    Here is a list of the events I will be speaking at in the next few months (Yes I am a sado-masochist): SharePoint Saturday Atlanta, Saturday, April 17th SharePoint .org Conference, April 18th-20th TEC, April 25th-28th SharePoint Saturday Huntsville, Saturday, May 1st SharePoint Saturday Philadelphia, Saturday, May 8th SharePoint Saturday DC, Saturday, May 15th SharePoint Saturday Ozarks, Saturday, June 13th I am trying to organize a SharePint at TEC after the LA SPUG meeting with Joel on Tuesday, April 27th.  Does anyone know if there is a good spot around the JW Marriott in downtown LA?  Maybe we can get a bunch of local SharePointers to attend. Technorati Tags: SharePoint Events,SharePoint Saturdays,SharePoint Conferences

    Read the article

  • Can only connect to file server on second attempt

    - by Ross Fleming
    I have a FreeNAS file server on my local network and I usually connect to it from Windows and Ubuntu computers. Ever since I upgraded from Ubuntu 12.04 to 12.10, Ubuntu will only connect on the second attempt. By which I mean: I browse to it via the file manager, and once I click on the link in "Bookmarks" it complains that it could not connect. If I then try again, it connects successfully and keeps the connection up until the laptop is suspended or loses its connection to the LAN for whatever reason. This isn't much of a problem in itself, as I don't mind having to click twice, but my real problem is that my scheduled backup complains that it cannot connect to the storage device if the share has not already been accessed during the current session. Is there some way to either stop the issue altogether or to force the backup tool (the default one) to immediately make a second attempt at connecting?

    Read the article

  • How customers view and interact with a company

    The Harvard Business Review article written by Rayport and Jaworski is aptly titled "Best Face Forward" because it sheds light on how customers view and interact with a company. In the past, most business interaction with customers was performed in a face-to-face meeting, where one party would present an item for sale and the other would decide whether to purchase it. In addition, if there was a problem with a purchased item, the buyer would bring it back to the person who sold it for resolution. One of my earliest examples of witnessing this was when I was around 6 or 7 years old and was allowed to spend the summer in Tennessee with my grandparents. My grandfather had just written a book about the local history of his town and was selling it to his friends and local bookstores. I still remember he offered to pay me a small commission for every book I helped him sell, because I was carrying the books around for him. Every sale he made was face to face with his customers, which allowed him to share his excitement for the book with everyone. In today's modern world there is less and less human interaction, as the use of computers and other technologies allows us to communicate within seconds even though both parties may be across the globe or just next door. That being said, customers view a company through multiple access points, called faces, that represent the ability to interact without actually seeing a human face. As a software engineer I find this both a good and a bad thing, because direct human interaction and technology-based interaction each have good and bad attributes depending on the customer. How organizations coordinate business and IT functions to provide quality service varies based on each individual business and the goals and directives put in place by its management. According to Rayport and Jaworski, the type of interaction used through a particular access point may lend itself to be people-dominant, machine-dominant, or a combination of both. The method by which a company communicates information through an access point is a strategic choice that relates costs and customer outcomes. To simplify this: the choice is based on what can give the customer the best experience of interacting with the company when the cost of the interaction is also a factor. I personally see examples of this every day at work. The company website is machine-dominant, with people updating and maintaining information; our group's department is people-dominant, because most of the customer interaction is done at the customer's location and is backed up by machine-based data sources; and our sales/member service department is a hybrid, because employees work in tandem with machines in order to assist customers with signing up or any other issue they may have. The positive and negative aspects of human and machine interfaces are a key consideration in deciding which interface, or combination of the two, to use when allowing customers to access a company. Rayport and Jaworski also cite MIT professor Erik Brynjolfsson's preliminary catalog of human and machine strengths. He stated that humans outperform machines in judgment, pattern recognition, exception processing, insight, and creativity. I have found this to be true based on how the sales and member service reps at my company handle a multitude of questions and various situations with a lot of unknown variables.
    A machine interface could never effectively handle these scenarios, because there are too many variables to consider, and it would not have the built-in logic to process each customer's claims and needs. In addition, he also stated that machines outperform humans in collecting, storing, transmitting, and routine processing. An example of this is my employer's website. Customers can simply go online and purchase a product without even talking to a sales or member services representative. The information is then stored in a database so that the customer can always go back, review their order, and access their selected services. A human, no matter how smart, would never be able to keep track of hundreds of thousands of customers, let alone know what they purchased or how much they paid. In today's technology-driven economy, every company must offer its customers multiple methods of access in order to survive. The more opportunity a company has to create a positive experience for its customers, in my opinion, the more likely the customer will return to that company again. I have noticed this with my personal shopping habits and experiences. References Rayport, J., & Jaworski, B. (2004). Best Face Forward. Harvard Business Review, 82(12), 47-58. Retrieved from Business Source Complete database.

    Read the article

  • How to create a reasonably sized urban area manually but efficiently

    - by Overv
    I have a game concept that only really works in an urban area of reasonable scale and diversity. In terms of what it should look like, think GTA; in terms of size, think more like a small neighbourhood with residents and a few local shops, perhaps a supermarket. I'm mostly experienced in programming and not at all with modelling, texturing or drawing, but I've found that SketchUp allows me to design interesting-looking buildings that I model on real-world buildings in my own neighbourhood. Designing these buildings and other objects can take from a few tens of minutes to a few hours each. My question is: what is the best approach for a one-man army like me, who can manage to model buildings, to create an interesting city environment in a reasonable amount of time? My game will not be based on procedural generation; the environment will actually be modelled, like GTA's cities.

    Read the article

  • How to write a network game? [closed]

    - by Tom Wijsman
    Based on Why is it so hard to develop an MMO?: Networked game development is not trivial; there are large obstacles to overcome in not only latency, but cheat prevention, state management and load balancing. If you're not experienced with writing a networked game, this is going to be a difficult learning exercise. I know the theory about sockets, servers, clients, protocols, connections and such things. Now I wonder how one can learn to write a network game: How do you balance load? How do you manage the game state? How do you keep things synchronized? How do you protect the communication and the client from reverse engineering? How do you work around latency problems? Which things should be computed locally and which on the server? ... Are there any good books, tutorials, sites, interesting articles or other questions regarding this? I'm looking for broad answers, but specific ones are fine too, to learn the differences.

    Read the article

  • Using RSYNC to Replicate Synology NAS DS710+ to Windows 7 Hard Drive

    Learn how to use a local backup drive on your Windows 7 system to replicate the data on any or all of your directories on your Synology NAS (Network Hard Drive Device) DS710+.  This post will...
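    For reference, the underlying command typically looks something like this when run from a Windows rsync port such as cwRsync (a sketch: the module name "NetBackup" assumes the NAS's rsync/network-backup service is enabled, and the share, host, and drive paths are placeholders):
        rsync -av --delete rsync://diskstation/NetBackup/photos/ /cygdrive/e/nas-backup/photos/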

    Read the article

  • How to debug a server that crashes once in a few days?

    - by Nir
    One of my servers crashes every few days. It does low-traffic static web serving + low-traffic dynamic web serving (PHP, local MySQL with small data, APC, memcache) + some background jobs like XML file processing. The only clue I have is that a few hours before the server dies it starts swapping (see screenshot http://awesomescreenshot.com/075xmd24 ). The server has a lot of free memory. Server details: Ubuntu 11.10 oneiric i386, scalarizr (0.7.185), python 2.7.2, chef 0.10.8, mysql 5.1.58, apache 2.2.20, php 5.3.6, memcached 1.4.7, Amazon EC2 (us-west-1). How can I detect the reason for the server crashes? When it crashes it's no longer accessible from the outside world.
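    Since the box stops being reachable once it dies, one low-tech way to catch the culprit is to log the top memory consumers periodically, so the log shows what was growing before the swap storm began. A minimal sketch (the log path and interval are arbitrary choices):
        # append a timestamped snapshot of the biggest memory users every 60 s
        while true; do
            date >> /var/log/memwatch.log
            ps aux --sort=-%mem | head -15 >> /var/log/memwatch.log
            sleep 60
        done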

    Read the article

  • Library and several small programs that use it: how should I structure my git repository?

    - by Dan
    I have some code that uses a library that I and others frequently modify (usually only by adding functions and methods). We each keep a local fork of the library for our own use. I also have a lot of small "driver" programs (~100 lines) that use the library and are used exclusively by me. Currently, I have both the driver programs and the library in the same repository, because I frequently make changes to both that are logically connected (adding a function to the library and then calling it). I'd like to merge my fork of the library with my co-workers' forks, but I don't want the driver programs to be part of the merged library. What's the best way to organize the git repositories for a large, shared library that needs to be merged frequently and a number of small programs that have changes that are connected to changes in the library?
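    One common layout for this situation is to split the library into its own repository and pull it into the drivers repository as a submodule, so library forks can be merged without the driver programs ever being involved. A sketch, with placeholder URLs:
        # in the (now drivers-only) repository, track the shared library:
        git submodule add git@yourhost:shared/library.git lib
        git commit -m "Track the shared library as a submodule"

        # merging a co-worker's fork then happens in the library repo alone:
        cd lib
        git remote add coworker git@yourhost:coworker/library.git
        git fetch coworker
        git merge coworker/master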

    Read the article

  • Doing port forwarding and then using it from within the internal network

    - by Ram Rachum
    We all know that by setting up port forwarding on the router, computers from outside the network are able, on the specified ports, to access internal computers by targeting the external IP. I'm now replacing a TP-Link router with a D-Link VDSL N 6740U router (and have copied over all the settings), and I've noticed that one thing stopped working: with the TP-Link router, you could access those port-forwarded computers from within the network, using the external IP, and the connections would be forwarded to the relevant computers. With the new D-Link router, this doesn't work. You might be wondering why you would want to use the external IP and port forwarding when you're inside the internal network anyway and could just access the internal IP. One example of why this is useful: you have an iPhone app that connects to a service on an internal computer. The iPhone app knows to connect to the external IP. When we put that iPhone inside the internal network (via WiFi), it suddenly stops working, because it can't access the service from the external IP anymore. Is it an inherent property of D-Link routers that they do not allow accessing internal servers from inside the network by targeting the external IP? Or is there a way to make it work?
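    What the TP-Link was doing is usually called NAT loopback (or hairpin NAT), and some router firmwares simply do not implement it. On a Linux box doing its own NAT, the equivalent would be a DNAT rule plus an extra SNAT rule so hairpinned LAN-to-LAN replies return through the router. A sketch with placeholder addresses (203.0.113.10 as the external IP, 192.168.1.50 as the internal server), not something the 6740U necessarily exposes:
        # forward port 80 on the external IP to the internal server
        iptables -t nat -A PREROUTING -d 203.0.113.10 -p tcp --dport 80 \
            -j DNAT --to-destination 192.168.1.50
        # rewrite the source for LAN clients so replies go back via the router
        iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -d 192.168.1.50 \
            -p tcp --dport 80 -j SNAT --to-source 192.168.1.1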

    Read the article

  • High Availability for IaaS, PaaS and SaaS in the Cloud

    - by BuckWoody
    Outages, natural disasters and unforeseen events have proved that even in a distributed architecture, you need to plan for High Availability (HA). In this entry I'll explain a few considerations for HA within Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). In a separate post I'll talk more about Disaster Recovery (DR), since each paradigm has a different way to handle that.

    Planning for HA in IaaS: IaaS involves Virtual Machines, so in effect an HA strategy here takes on many of the same characteristics as it would on-premises. The primary difference is that the vendor controls the hardware, so you need to verify what they do for things like local redundancy and so on from the hardware perspective. As far as what you can control and plan for, the primary factors fall into three areas: multiple instances, geographical dispersion and task-switching. In almost every cloud vendor I've studied, to ensure your application will be protected by any level of HA, you need to have at least two of the instances (VMs) running. This makes sense, but you might assume that the vendor just takes care of that for you - they don't. If a single VM goes down (for whatever reason) then access to it is lost. Depending on multiple factors, you might be able to recover the data, but you should assume that you can't. You should keep a sync to another location (perhaps the vendor's storage system in another geographic datacenter, or a local location) to ensure you can continue to serve your clients. You'll also need to host the same VMs in another geographical location. Everything from a vendor outage to a network path problem could prevent your users from reaching the system, so you need multiple locations to handle this. This means that you'll have to figure out how to manage state between the geos. If the system goes down in the middle of a transaction, you need to figure out what part of the process the system was in, and then re-create or transfer that state to the second set of systems. If you didn't write the software yourself, this is non-trivial. You'll also need a manual or automatic process to detect the failure and re-route the traffic to your secondary location. You could flip a DNS entry (if your application can tolerate that) or invoke another process to alias the first system to the second, such as load-balancing and so on. There are many options, but all of them involve coding the state into the application layer. If you've simply moved a stateful application to VMs, you may not be able to easily implement an HA solution.

    Planning for HA in PaaS: Implementing HA in PaaS is a bit simpler, since it's built on the concept of stateless application deployment. Once again, you need at least two copies of each element in the solution (web roles, worker roles, etc.) to remain available in a single datacenter. Also, you need to deploy the application again in a separate geo, but the advantage here is that you can work out a "shared storage" model such that state is auto-balanced across the world. In fact, you don't have to maintain a "DR" site: the alternate location can be live and serving clients, and only take on extra load if the other site is not available. In Windows Azure, you can use the Traffic Manager service to route the requests as a type of auto-balancer. Even with these benefits, I recommend a second backup of storage in another geographic location. Storage is inexpensive, and that second copy can be used for not only HA but DR.

    Planning for HA in SaaS: In Software-as-a-Service (such as Office 365, or Hadoop in Windows Azure) you have far less control over the HA solution, although you still maintain the responsibility to ensure you have it. Since each SaaS is different, check with the vendor on the solution for HA - and make sure you understand what they do and what you are responsible for. They may have no HA for that solution, or pin it to a particular geo, or perhaps they have massive HA built in with automatic load balancing (which is often the case). All of these options (with the exception of SaaS) involve higher costs for the design. Do not sacrifice reliability for cost - that will always cost you more in the end. Build in the redundancy and HA at the very outset of the project - if you try to tack it on later in the process, the business will push back and potentially not implement HA. References: http://www.bing.com/search?q=windows+azure+High+Availability  (each type of implementation is different, so I'm routing you to a search on the topic - look for the "Patterns and Practices" results for the area in Azure you're interested in)
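    As a concrete illustration of the detect-and-re-route step described above, a watchdog can poll the primary endpoint and trigger whatever re-aliasing mechanism you chose. A minimal sketch, where the health URL is a placeholder and failover.sh stands in for a hypothetical script that flips the DNS entry or load-balancer alias:
        # poll the primary every 30 s; fail over after 3 consecutive misses
        fails=0
        while true; do
            if curl -fsS --max-time 5 https://primary.example.com/health > /dev/null; then
                fails=0
            else
                fails=$((fails + 1))
                [ "$fails" -ge 3 ] && ./failover.sh && fails=0
            fi
            sleep 30
        done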

    Read the article

  • Approaching events #mstc11 #ppws #sqlbits

    - by Marco Russo (SQLBI)
    The spring season is always full of events and I'm just preparing for a number of them. First of all, we are getting very good interest in the PowerPivot Workshop in Copenhagen on 21-22 March 2011. Tomorrow (Friday, March 4) will be the last day to take advantage of the Early Bird rate for this date. We will also participate in an evening meeting of local user groups on March 21 in Copenhagen; more news about this in the next few days. Other scheduled dates are in Dublin (28-29 March 2011) and in...(read more)

    Read the article

  • How do I query the gvfs metadata for a specific attribute?

    - by Mathieu Comandon
    A nice feature in evince is that when you close the program and later reopen the same PDF, it automatically jumps to the page you were reading. The problem I have is that I often read ebooks on several computers, and I have to find where I was on the last computer I was reading the PDF on. I think syncing these bookmarks in UbuntuOne would be a killer feature for people like me who read PDFs on different computers. By investigating a bit, I found where evince stores this data: it's in the gvfs metadata, and it can be accessed for a particular document by typing gvfs-ls -a "metadata::evince::page" myEbook.pdf Rather than querying a particular file, I'd like to query the whole metadata file (located in ~/.local/share/gvfs-metadata/home for the home directory) for any file where this particular attribute is set to some value. The biggest issue is that gvfs metadata are stored in binary files, and we all know it's not easy to get something out of a binary file. So, do you know any way to query the gvfs metadata for some attribute?
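    Until a proper query tool turns up, one brute-force workaround is to run the same per-file command you already have over every PDF, letting gvfs parse its own binary store. A sketch (slow on large trees, and it assumes the attribute name above):
        find ~ -name '*.pdf' -print -exec gvfs-ls -a 'metadata::evince::page' {} \;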

    Read the article

  • Why might my Fedora 15 live USB persistent storage not work?

    - by Richard J Foster
    I created a Fedora 15 "live" USB stick using the live USB creator found at https://fedorahosted.org/liveusb-creator/ and the Fedora 15 i686 Desktop ISO image, with the persistent storage space set to 4096 MB. (The USB stick I have available has an 8 GB capacity, so there should be plenty of space.) Fedora appears to boot correctly; however, it seems that the persistent storage is not working. To verify this, I opened a terminal prompt, then did su - followed by yum update yum. As expected, I was informed that a new version was available. (The live CD contains version 3.2.29-4; at the time of typing, 3.2.29-6 is the current version.) After installing, I verified that the new version was installed by typing yum --version. I then shut down the system using shutdown now. After the system had shut down, I rebooted and returned to the terminal prompt. On typing yum --version, I was informed that the version was 3.2.29-4 (i.e. the original version). Why might the persistent storage not be working? Is there anything I can do to fix it?

    Read the article

  • Is TortoiseSVN really this buggy?

    - by John Isaacks
    I have been using TortoiseSVN for a couple of weeks now, and I get errors very often - almost everything I do creates an error. This happens with repositories on the internet, locally on my machine, and on a machine on the network, so I started to keep track. Some examples are below. 12/31/2010 Can't move 'C:\Users\jisaacks\Desktop\my branch test\.svn\tmp\entries' to 'C:\Users\jisaacks\Desktop\my branch test\.svn\entries': The file or directory is corrupted and unreadable. 01/04/2011 Commit failed (details follow): Server sent unexpected return value (405 Method Not Allowed) in response to MKCOL request for '/svn/kranichs-svn/!svn/wrk/b316f15e-0869-4644-9c53-87aa0103506b/branches' 01/06/2011 Can't move 'C:\Users\jisaacks\Desktop\DVD Catalog\vendors\.svn\tmp\entries' to 'C:\Users\jisaacks\Desktop\DVD Catalog\vendors\.svn\entries': The file or directory is corrupted and unreadable. 01/06/2011 Can't move 'C:\Users\jisaacks\Desktop\DVD Catalog\cake\tests\test_app\views\layouts\.svn\tmp\entries' to 'C:\Users\jisaacks\Desktop\DVD Catalog\cake\tests\test_app\views\layouts\.svn\entries': The file or directory is corrupted and unreadable. 01/06/2011 Commit failed (details follow): attempt to write a readonly database attempt to write a readonly database That last one, about the read-only database, happens every time I commit. Say I am working on the head revision (7) in a working copy. I make a change and commit it. It gives me this error, but if I look at the log it tells me that there is now a revision 8 (the commit I just made) while I am still on revision 7, so I need to run update to get onto the revision I just committed. I hope I explained that clearly. Anyway, with all these errors I wonder: is TortoiseSVN just this unstable? Does everyone have these issues, or is it just me? And if it's just me, what could I be doing wrong?

    Read the article

  • PHP cannot connect to MySQL

    - by yogal
    Hello, I recently installed Apache 2 + PHP 5.3.1 + MySQL 5.1.44 on my Windows 7 64-bit machine following this guide: http://sleeplessgeek.blogspot.com/2010/01/setting-up-apache-php-mysql-phpmyadmin.html It all went fine, and PHP is working great (even with XDebug), but I cannot connect to the MySQL server. A simple script I wrote to test the connection (yes, root has no password):
        $username = "root";
        $password = "";
        $database = "test";
        $hostname = "localhost";
        $conn = mysql_connect($hostname, $username, $password)
            or die("Unable to connect to MySQL Database!!");
    It prints this error after a 60-second timeout: Warning: mysql_connect() [function.mysql-connect]: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. I can connect to MySQL from the command line: mysql -h localhost -u root Services are working properly. There also seems to be a problem with phpMyAdmin (using 3.2.5): as soon as I type the user and password, the page loads and turns blank (content-length in the headers is 0, but the status code is 302 Found). Looks like something wrong with cookies (my auth method). I hope someone has a clue; it has to be something dumb simple that I missed. Thanks in advance.
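    One frequent cause on Windows 7, worth checking here even though it is only an assumption about this setup: "localhost" resolves to the IPv6 address ::1 first, and if MySQL is listening only on IPv4 the attempt times out exactly like this. Connecting to the IPv4 loopback address directly sidesteps it:
        // sketch: use 127.0.0.1 instead of "localhost" to force IPv4
        $conn = mysql_connect("127.0.0.1", "root", "")
            or die("Unable to connect: " . mysql_error());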

    Read the article

  • How to move an object using X and Y coordinates in JavaScript

    - by Geroy290
    I am making a 2D game with JavaScript and HTML5 and am trying to move an image that I have drawn with JavaScript like so:
        //canvas
        var c = document.getElementById("gameCanvas");
        var ctx = c.getContext("2d");
        //baseball
        var baseball = new Image();
        baseball.onload = function() {
            ctx.drawImage(baseball, 400, 425);
        };
        baseball.src = "baseball2.png";
    I'm not sure how I would move it, though. I have seen that many people seem to just type something like ballX and ballY, but I don't understand where the actual x and y definitions come from. Here is my code so far: http://jsfiddle.net/xRfua/ I have a different image source, but it is a local source so I couldn't include it. Thanks in advance for any help!
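    For reference, ballX and ballY are just ordinary variables that hold the image's current position; movement comes from changing them and redrawing every frame. A minimal sketch building on the setup above (the speed values are arbitrary):
        var ballX = 400, ballY = 425;   // current position
        var speedX = 2, speedY = -1;    // pixels moved per frame

        function update() {
            ballX += speedX;                          // move the ball
            ballY += speedY;
            ctx.clearRect(0, 0, c.width, c.height);   // erase the last frame
            ctx.drawImage(baseball, ballX, ballY);    // redraw at the new spot
            requestAnimationFrame(update);            // schedule the next frame
        }

        baseball.onload = update;   // start the loop once the image has loaded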

    Read the article

  • Permission denied errors while starting namenode in Hadoop 2.2.0

    - by Riddle
    I recently installed Hadoop 2.2.0 on my desktop running: Linux livingstream 3.2.0-29-generic #46-Ubuntu SMP Fri Jul 27 17:03:23 UTC 2012 x86_64 x86_64 x86_64 GNU/Lin I am able to format the namenode just fine; however, when I start the namenode using the following command: hadoop-daemon.sh start namenode I get the following permission errors:
        hduser@livingstream:/usr/local/hadoop$ hadoop-daemon.sh start namenode
        mkdir: cannot create directory `/var/run/hadoop': Permission denied
        starting namenode, logging to /var/log/hadoop/hduser/hadoop-hduser-namenode-livingstream.out
        /usr/sbin/hadoop-daemon.sh: line 138: /var/run/hadoop/hadoop-hduser-namenode.pid: No such file or directory
    Can anyone please help me with these errors? Best Regards, Ishan
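    The script is failing to create its PID directory under /var/run, which hduser cannot write to. A common workaround (an assumption about this setup, not a confirmed fix) is to pre-create the directory with the right owner, or to point HADOOP_PID_DIR at a directory hduser owns:
        sudo mkdir -p /var/run/hadoop
        sudo chown hduser:hadoop /var/run/hadoop

        # or, in etc/hadoop/hadoop-env.sh:
        export HADOOP_PID_DIR=/usr/local/hadoop/pids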

    Read the article

  • Azure Full trust permissions

    - by kaleidoscope
    Under Windows Azure full trust, your role has access to a variety of system resources that are not available under partial trust. File System Resources: A role running in Windows Azure has permissions to read and write to certain file, directory, and volume resources on the server. These permissions are outlined in the following table.
        File system resource                            Permission
        System root directory                           No access
        Subdirectories of the system root directory     No access
        Windows directory                               Read access only
        Machine configuration files                     No access
        Service configuration file                      Read access only
        Local storage resource                          Full access
    Registry Resources: The following table outlines permissions available to the role when accessing the registry while running in Windows Azure.
        HKEY_CLASSES_ROOT       Read access
        HKEY_CURRENT_USER       No access
        HKEY_LOCAL_MACHINE      Read access
        HKEY_USERS              Read access
        HKEY_CURRENT_CONFIG     Read access
    More details can be found at: http://msdn.microsoft.com/en-us/library/dd573363.aspx   Amit, S

    Read the article

  • SQL Server Express Profiler

    - by David Turner
    During a recent project, while waiting for our development database to be provisioned on the client's corporate SQL Server environment (these things can sometimes take weeks or months to be set up), we began our initial development against a local instance of SQL Server Express, just as an interim measure until the development database was live.  This was going just fine, until we found that we needed to do some profiling to understand a problem we were having with the performance of our ORM-generated data access layer.  The full version of SQL Server Management Studio includes a profiler that we could use to help with this kind of problem; however, the Express version does not, so I was really pleased to find that there is a freely available profiler for SQL Server Express, imaginatively titled 'SQL Server Express Profiler', and it worked great for us.  http://sites.google.com/site/sqlprofiler/

    Read the article

  • Pointing a domain away from Google

    - by redbeard
    I've already asked this question on Google Apps support but didn't get an answer yet, and I am in a bit of a hurry - I need to have the website up by Monday. Question: we have a domain registered at 101domain which we used just for Google Apps, for email and some services; the domain was never used for a website. Now we have pointed the domain to a local hosting service, set everything up like it should be, and pointed back to Google from the hosting service for Gmail, and Gmail works perfectly. The problem is that we tried to set up a website at the hosting service, but the domain still points to the Google Apps login. Everything looks like it should; we recently bought two more domains that are set up the same way (we use Google Apps on them too), and the only difference that I can think of is that we didn't use them for Apps first. Could this be because the CNAME change lags behind the DNS server change?
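    To see what is actually being served while you wait, you can compare what your resolver returns against the hosting service's nameserver directly (a sketch with placeholder names):
        dig +short yourdomain.com A
        dig +short www.yourdomain.com CNAME
        # query the hosting provider's nameserver directly, bypassing caches:
        dig +short @ns1.yourhost.com www.yourdomain.com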

    Read the article

  • GDD-BR 2010 [1H] VC Panel: Entrepreneurship, Incubation and Venture Capital

    GDD-BR 2010 [1H] VC Panel: Entrepreneurship, Incubation and Venture Capital Speakers: Don Dodge, Eric Acher, Humberto Matsuda, Alex Tabor Track: Panels Time slot: H [17:20 - 18:05] Room: 1 Startups can be built and funded anywhere in the world, not just Silicon Valley. Venture Capital investors are investing in startups globally, and funding incubators to hatch their future investments. Find out how you can get into an incubator, or funded by a Venture Capitalist or Angel Investors. Learn from examples in the USA and hear from local VC investors in this panel discussion. Get your questions answered by real investors.

    Read the article

  • XCOPY access denied error on My Documents folder

    - by Ryan M.
    Here's the situation. We have a file server set up at \\fileserver\ that has a folder for every user at \\fileserver\users\first.last I'm running an xcopy command to back up the My Documents folder from their computer to their personal folder. The command I'm running is: xcopy "C:\Users\%username%\My Documents\*" "\\fileserver\users\%username%\My Documents" /D /E /O /Y /I I've been silently running this script at login without the users knowing, just so I can get it to work before telling them what it does. After I discovered it wasn't working, I manually ran the batch script that executes the xcopy command on one of their computers and got an access denied error. I then logged into a test account on my own computer and got the same error. I checked all the permissions for the share and security and they're set how I want them. I can manually browse to that folder and create new files. I can drag and drop items into the \\fileserver\users\first.last location and it works great. So I tried something else to find the source of the access denied problem: I ran an xcopy command to copy the My Documents folder to a different location on the same machine and I still got the access denied error! So xcopy is denied access whenever it tries to copy the My Documents folder. Any suggestions on how I can get this working? Anyone know the reason behind the access denied error?
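    One likely culprit, worth verifying before changing anything else: on Windows 7 the path "C:\Users\<name>\My Documents" is not a real folder but a hidden junction kept for XP compatibility, and its ACL explicitly denies directory listing - which xcopy reports as access denied. The actual folder is named "Documents", so a sketch of the adjusted command would be:
        xcopy "C:\Users\%username%\Documents\*" "\\fileserver\users\%username%\My Documents" /D /E /O /Y /I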

    Read the article

  • How do I use key combinations on an axis on a joystick in xorg?

    - by valadil
    I'm using xserver-xorg-input-joystick on Debian Stable so I can use a joystick in place of the mouse. I have mouse movement working correctly, but got stuck trying to add functions for some other keys. These work: #Left stick #Pointer Option "MapAxis1" "mode=relative axis=1.5x" Option "MapAxis2" "mode=relative axis=1.5y" #Right stick #Arrow keys Option "MapAxis4" "mode=relative keylow=Left keyhigh=Right" Option "MapAxis5" "mode=relative keylow=Up keyhigh=Down" But when I try to make key combos (so I can navigate windows and screens in xmonad) I have no luck. #dpad #xmonad focus #up/down toggle window. l/r choose screen. Option "MapAxis8" "mode=relative keylow=Super_L,k keyhigh=Super_L,j" Option "MapAxis7" "mode=relative keylow=Super_L,w keyhigh=Super_L,e" I've also tried Super_R, plain old Super, Meta, and mod4mask, and anything else I can think of. These buttons print the letter, but don't appear to hold down the modifying key. The exception to that is shift. If I specify Shift_L or Shift_R, I get a capital letter. xev indicates that modifier keys are being pressed. If I lower Axis8, I get press Super_L, press k, release k, release Super_L. That looks like it should be working. Maybe this is an xmonad problem and not a joystick driver one? I'm also having trouble with getting an axis to use other XF86 keys: # triggers # song selection Option "MapAxis3" "mode=relative keylow=none keyhigh=XF86AudioForward" Option "MapAxis6" "mode=relative keylow=none keyhigh=XF86AudioBack" That does nothing. Any idea why? If it turns out that this isn't something I can do on an axis, but would work with a button, is there a way to treat my joysticks as buttons? Also, if anyone has suggestions for the other 5 buttons I'll have left after mouse buttons are bound, I'm listening.

    Read the article
