Search Results


  • WWNs, WWPNs and Fibre Channel addresses

    - by user238230
    There is lots of contradictory information on these subjects and I don't know why. My first question is about the 64-bit WWN. One reference claims the terms WWN and WWPN are synonymous. An online source seems to refute this. They say: a WWPN (world wide port name) is the unique identifier for a Fibre Channel port, whereas a WWN (world wide name) is the unique identifier for the node itself. A good example is a dual-port HBA: there will be two WWPNs (one for each port) and only a single WWN for the card itself.

    Question #1: Which is correct? I'm almost positive I read that every "port" has a WWN.

    My next question is about the 24-bit FC address that is dynamically allocated to a port when it is introduced to the switch. The Domain ID field is defined as "a unique number provided to each switch in the fabric."

    Question #2: Do Domain IDs only apply to switch ports? For example, what would the Domain ID be for an HBA? None? The same as the switch port it is connected to?

    Question #3: My last question is about the Name Server of a switch. A book example shows the routing of a message through the switch. It uses the WWNs of the source and destination ports to route the message. I am assuming that the Name Server must associate the WWN and the FC address in some way in order to route the message, correct?
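
    For what it's worth, on a Linux host with an FC HBA the two identifiers can be inspected side by side through sysfs (a hedged illustration, not part of the original question; the fc_host class only exists when an FC driver is loaded):

        # WWNN (node name) and WWPN (port name) for each FC SCSI host.
        # On a dual-port HBA each port shows up as its own hostN entry,
        # and the two port_name values will differ.
        cat /sys/class/fc_host/host*/node_name
        cat /sys/class/fc_host/host*/port_name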

  • Laptop shuts down randomly without warning

    - by Robert P.
    My Asus Zenbook UX32V turns off randomly when I'm working on it. This happens both when the computer has recently been turned on (5 minutes) and after it has been on for several days.

    - I'm not running any heavy software
    - The laptop is not heating up
    - The fan is not running at maximum capacity (again, it's not hot)
    - It happens when the laptop is lying still on the table
    - There is no warning, it simply goes black
    - It happens both when charging and on battery

    My guess is that it suddenly loses power somehow. What puzzles me is that I can flip the laptop upside down, turn it sideways, shake it, etc. without it shutting off. This makes me think it's not something loose causing occasional short circuits. I realize that the laptop probably doesn't like flipping and shaking, but it was the best way I could troubleshoot. I rarely turn the computer off; I only put it in hibernate or sleep mode (most often hibernate). I've never found the laptop powered off when waking it from sleep mode. I've had the problem for a few months and it happens 2-8 times a week.

    Specs:
    - Asus Zenbook UX32V
    - Windows 8.1 (it happened in Windows 8.0 too)
    - Intel i5-3317U CPU @ 1.70GHz

    The laptop is approx. 1.5 years old, but it has a small dent on one of the sides that probably voids the warranty. The dent has been there since week one and I don't think it's related to the problems I'm having now. Does anyone have a clue what might cause this, and how it might be fixed? I've read all the other questions that seem related to my issue (some are listed below), but none report the same behavior I'm experiencing; most involve heavy games, overheating, etc.

    - Asus N53J Laptop randomly shuts down
    - Laptop is randomly shutting off
    - Computer shuts down without warning
    - My laptop acer aspire 5720 suddenly turn off randomly
    - Computer randomly shutting down
    - Windows 8.1 randomly shuts self down
    - ASUS K55VM Laptop unexpectedly shuts down

  • Isolate clients on same subnet?

    - by stefan.at.wpf
    Given n (e.g. 200) clients in a /24 subnet and the following network structure:

        client 1 \
            .     \
            .      switch -- firewall
            .     /
        client n /

    (in words: all clients are connected to one switch, and the switch is connected to the firewall)

    By default, client 1 and client n can communicate directly through the switch, without any packets ever arriving at the firewall, so none of those packets can be filtered. However, I would like to filter the packets between the clients, so I want to disallow any direct communication between them. I know this is possible using VLANs, but then - according to my understanding - I would have to put each client in its own network. However, I don't even have that many IP addresses: I have about 200 clients, only a /24 subnet, and all clients shall have public IP addresses, so I can't just create a private network for each of them (well, maybe using some NAT, but I'd like to avoid that).

    So, is there any way to tell the switch: forward all packets to the firewall, and don't allow direct communication between clients? Thanks for any hint!

  • Recommended offline on-demand virus scanners

    - by ashh
    I have never run full anti-virus on my Windows XP systems. Instead I use various anti-malware tools to manually perform scans every few weeks. This approach, combined with Windows updates and general care about what websites I visit and what files I download, has kept me 99% free of problems. The remaining 1% has occurred when I download files that I know may contain malware but still decide the risk is worth it. On the 2 occasions in 10 years when I did get caught doing this, I realised that being able to easily scan the files first would most likely have avoided the infection.

    I don't need, or want, to run a "stay resident" anti-virus. Also, online scanners such as Kaspersky etc. limit uploads to small files, so these are not always useful. In summary, I would simply like to be able to download a file and then manually initiate an on-demand anti-virus scan on that downloaded file only. I'm sure some/most anti-virus products do both, but once again I don't really want to pay for, or need, the stay-resident part. Any recommendations (commercial or free)?

    UPDATE: This is not an exact duplicate, nor a possible duplicate. I searched for and read other questions on anti-virus here at SuperUser and found none that answered my question. I am specifically asking about anti-virus scanners that run ON-DEMAND locally on the computer, not online scanners.

  • How to automatically restart Apache service after HTTP 503 error?

    - by Gnanam
    Our production server is running Apache v2.2.4 on CentOS 5.2. Mono v1.2.4 is integrated with Apache. Recently we faced a problem on this server: from Apache's access_log, I found an HTTP 500 internal server error for one HTTP request, and all subsequent HTTP requests failed with an HTTP 503 service unavailable error. From then on, none of the requests succeeded. We only realized some time later that our application was not working because of this error, and then we restarted the Apache service.

    My questions are:

    1. In this kind of situation, how do I automatically restart the Apache service when an HTTP 503 error is encountered? Is there any Apache directive available for this?
    2. In general, what would cause an HTTP 503 error in Apache?

    NOTE: Mono helps in running applications developed in .NET on a Linux-based OS.

    EDIT: I agree on finding the root cause of this problem. In fact, we've been analyzing that too. Until we resolve it, I'm trying to find out whether Apache could be restarted immediately on its own, without any downtime/service disruption for application users.
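
    There is no Apache directive that restarts the server when it starts answering 503, so the usual workaround is an external watchdog. Below is a minimal sketch meant to be run from cron; the probe URL, log path and init script are assumptions, not details from the original setup, and a restart only hides the symptom until the root cause of the 500/503 is found:

        #!/bin/bash
        # Probe the local site and restart Apache if it answers with HTTP 503.
        URL="http://localhost/"
        STATUS=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 "$URL")

        if [ "$STATUS" = "503" ]; then
            echo "$(date) got HTTP $STATUS - restarting httpd" >> /var/log/httpd-watchdog.log
            /sbin/service httpd restart
        fi

    A crontab entry such as "* * * * * /usr/local/bin/httpd-watchdog.sh" would run the check once a minute.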

  • New to building computers worried about temps

    - by dave
    I'm new to building my own computers and I was wondering about maximum temperatures. I understand that the room temperature can affect the computer's temperature, but how relevant is it? I understand that if my room is at 20°C, none of my computer parts could be cooler than that. But if my room is 27°C instead of 20°C, will this cause my computer's parts to heat up more, or faster?

    The new computer I built myself for gaming:
    - i7 2600k
    - 16GB DDR3 1600 RAM
    - HD6970 2GB
    - 240GB SSD (I bought a NAS with 3 2TB drives in RAID 5 for my home network)
    - 850W modular PSU

    I also have my old HP computer:
    - i3 2120
    - 8GB RAM
    - HD6770
    - 1TB HDD

    I also have 3 laptops in my household, but I am not worried about their temps; they heat up my legs but they are never under stress. Due to size and money reasons I used an old case, and it only has one of its side panels left on it. Is this bad for the computer, and will the extra dust cause problems? Should I leave it this way, or face the missus' wrath and buy a case? If so, is there any particular case I should get? I don't care about looks; I just want a card reader and USB slots, and for it to run as cool as or cooler than now. My current case has 1 fan.

    Also, what are the max temps for my new and old computer parts? Is 40°C under load OK for my CPU? What about 70°C for my GPU - is that OK too, or should I worry? What are normal and safe temps for my components? I have looked around but there seem to be lots of different answers. I know that 100°C is bad, but I want my parts to last as long as possible, and this site always seems to give good replies without arguing or flaming.

  • SQL Server architecture - they want to move my database to new instance...Why?

    - by O'MALLEY
    Our current production database environment contains about 10 similarly managed databases. Our agency has just purchased and is installing new blade chassis, and wants to move my database to a new instance (leaving the other 9 on another). This decision is being driven by one of our IT staff, not a DBA. I am a project manager, not a DBA, but I know enough not to necessarily have a good feeling about this decision, and I am urging our IT department to make a sound decision based on what is best for the database. Our IT department has stated that it is not good to have all our eggs in one basket, and has also stated that my database contains "regulatory data" so it should be on its own instance.

    A couple of truths:
    - None of the databases on the current instance are OLTP databases, nor are any of them data warehouses
    - My database currently has joins/views to a couple of the other databases in the production environment

    So my questions are as follows:

    1. Am I wrong to disregard a statement about eggs in baskets? (Hello, this is why we have maintenance plans/disaster recovery plans.) I'll mention that other databases also contain regulatory data.
    2. What types of questions do I need to ask to determine if this is a sound decision? (A DBA friend mentioned that if the service level agreement of said database does not radically differ from the others, then why do they want to do this?)
    3. I have done some research on linked servers. What arguments should I bring forth about the fact that I have views set up that rely on data from other DBs currently?

  • Time sync fails on Hyper-V VM, but succeeds when I log in as a domain user

    - by Richard Beier
    We have a Windows Server 2003 SP2 VM running on Hyper-V (Server 2008 R2 host). The VM has Hyper-V time synchronization enabled. I noticed that the time on the VM was fast by around 25 minutes. I saw the following in the event log:

        The time provider NtpClient is configured to acquire time from one or more time sources, however none of the sources are currently accessible. No attempt to contact a source will be made for 15 minutes. NtpClient has no source of accurate time.

        The time provider NtpClient cannot reach or is currently receiving invalid time data from ourdc.ourdomain.local (ntp.d|192.168.2.18:123-192.168.2.2:123).

        Time Provider NtpClient: No valid response has been received from domain controller ourdc.ourdomain.local after 8 attempts to contact it. This domain controller will be discarded as a time source and NtpClient will attempt to discover a new domain controller from which to synchronize.

    I had been logged in as a local user. (We have an old app that runs on this VM - it requires a user to be logged in at all times, and we use a non-domain user account for this.) When I logged in as a domain user, the clock almost immediately corrected itself. Running "w32tm /monitor" and "net time" as the domain user showed no errors and indicated that our domain controller was the time source.

    Does anyone know what might cause this, and why logging in under a domain account fixes the problem? I'm wondering if the time will start to drift again. Thanks for your help, Richard

  • RHEL Java Application returns "No space left on device" but only 3% used

    - by FiveO
    My Java application returns the following exception when saving a new file in /opt/wso2 on CentOS 6.4:

        Caused by: java.io.FileNotFoundException: /opt/wso2/FrameworkFiles/trk_2014062500042488825_TRCK_PatfallHospis_pFromHospis_66601fb3-a03c-4149-93c3-6892e0a10fea.txt (No space left on device)
            at java.io.FileOutputStream.open(Native Method)
            at java.io.FileOutputStream.<init>(FileOutputStream.java:212)
            at java.io.FileOutputStream.<init>(FileOutputStream.java:99)
            at com.avintis.esb.framework.adapter.wso2.FrameworkAdapterWSO2.sendMessages(FrameworkAdapterWSO2.java:634)
            ... 23 more

    But when I run df -a I can see that the partition still has plenty of space available:

        [root@stzsi466 wso2]# df -a
        Filesystem                       1K-blocks     Used  Available Use% Mounted on
        /dev/mapper/vg_stzsi466-lv_root   12054824  2116092    9326380  19% /
        proc                                     0        0          0   -  /proc
        sysfs                                    0        0          0   -  /sys
        devpts                                   0        0          0   -  /dev/pts
        tmpfs                              4030764        0    4030764   0% /dev/shm
        /dev/sda1                           495844    53858     416386  12% /boot
        /dev/sdb1                         51605436  1424288   47559744   3% /opt/wso2
        none                                     0        0          0   -  /proc/sys/fs/binfmt_misc

        [root@stzsi466 ~]# df -i
        Filesystem                        Inodes  IUsed    IFree IUse% Mounted on
        /dev/mapper/vg_stzsi466-lv_root   765536  45181   720355    6% /
        tmpfs                            1007691      1  1007690    1% /dev/shm
        /dev/sda1                         128016     44   127972    1% /boot
        /dev/sdb1                        3276800   6137  3270663    1% /opt/wso2

    What is the problem here? Is it caused by the Java build on CentOS 6.4? I have another server running Red Hat RHEL 6.4 where everything works fine - same Java etc. Does anyone know of this problem?
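
    A hedged checklist of follow-up commands that sometimes explain an ENOSPC error when both blocks and inodes look free - generic checks, not a diagnosis of this particular box:

        mount | grep /opt/wso2      # read-only remount or unusual mount options?
        dmesg | tail -n 50          # recent kernel/ext4 messages about /dev/sdb1
        lsof +L1 /opt/wso2          # deleted-but-still-open files pinning space
        repquota -a 2>/dev/null     # user/group quotas, if quota tools are in use
        df -i /opt/wso2             # re-check inodes at the exact moment a write fails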

  • Apache serving empty gzip with assets produced by Rails Asset Pipeline

    - by PizzaPill
    I followed the steps described in the blog post "The Asset Pipeline, from development to production" and tweaked them to my environment. The two important files are:

    /etc/apache/site-available/example.com

        <VirtualHost *:80>
          ServerName example.com
          ServerAlias www.example.com

          DocumentRoot "/var/www/sites/example.com/current/public"
          ErrorLog "/var/log/apache2/example.com-error_log"
          CustomLog "/var/log/apache2/example.com-access_log" common

          <Directory "/var/www/sites/example.com/current/public">
            Options All
            AllowOverride All
            Order allow,deny
            Allow from all
          </Directory>

          <Directory "/var/www/sites/example.com/current/public/assets">
            AllowOverride All
          </Directory>

          <LocationMatch "^/assets/.*$">
            Header unset Last-Modified
            Header unset ETag
            FileETag none
            ExpiresActive On
            ExpiresDefault "access plus 1 year"
          </LocationMatch>

          RewriteEngine On
          # Remove the www
          RewriteCond %{HTTP_HOST} ^www.example.com$ [NC]
          RewriteRule ^(.*)$ http://example.com/$1 [R=301,L]
        </VirtualHost>

    /var/www/sites/example.com/shared/assets/.htaccess

        RewriteEngine on
        RewriteCond %{HTTP:Accept-Encoding} \b(x-)?gzip\b
        RewriteCond %{REQUEST_FILENAME}.gz -s
        RewriteRule ^(.+) $1.gz [L]

        <FilesMatch \.css\.gz$>
          ForceType text/css
          Header set Content-Encoding gzip
        </FilesMatch>

        <FilesMatch \.js\.gz$>
          ForceType text/javascript
          Header set Content-Encoding gzip
        </FilesMatch>

    But Apache seems to send empty gzip files, because the test site loses all its styles and Firebug doesn't find any content for the CSS files. If I call the asset path directly, I get some gibberish that looks like binary data. If I move the .htaccess file, everything is back to normal. How could I find out where/what went wrong, or do you have any suggestions about what error I made?

        > apache2 -v
        Server version: Apache/2.2.14 (Ubuntu)
        Server built:   Mar 5 2012 16:42:17

        > uname -a
        Linux node0 2.6.18-028stab094.3 #1 SMP Thu Sep 22 12:47:37 MSD 2011 x86_64 GNU/Linux
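
    A quick way to see what the client actually receives for one asset is to compare the response headers with and without gzip in Accept-Encoding (the asset path below is hypothetical; the point is whether Content-Type and Content-Encoding match the body being served):

        curl -sI http://example.com/assets/application.css | egrep -i 'HTTP/|content-type|content-encoding|content-length'
        curl -sI -H 'Accept-Encoding: gzip' http://example.com/assets/application.css | egrep -i 'HTTP/|content-type|content-encoding|content-length'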

  • Ubuntu 12.04/12.10 can't detect windows or any other partitions(Asus z77 UEFI BIOS)

    - by user971155
    I've recently finished putting together my new PC (motherboard: ASUS Z77 with UEFI BIOS) and unfortunately not everything works quite well. After installing Windows 7 Ultimate on a single primary partition (SATA drive), I decided to allocate one more logical partition for additional needs. When I tried doing it with the partition manager, it said that it couldn't allocate the requested size, even though I certainly asked for much less than was available. I thought that it might be a Windows issue and proceeded to installing Ubuntu 12.10 x64. When the graphical interface loaded, it showed me a message stating that it can't find any other operating system on the drive. When I used the custom partitioning option, it showed me none of my current partitions (including the one with Windows). However, when I boot with the "Try Ubuntu" feature it does find them! I find it weird though. Here's what the console presents me with:

        ubuntu@ubuntu:~$ sudo os-prober
        /dev/sda1:Windows 7 (loader):Windows:chain

        ubuntu@ubuntu:~$ sudo fdisk -l

        Disk /dev/sda: 640.1 GB, 640135028736 bytes
        255 heads, 63 sectors/track, 77825 cylinders, total 1250263728 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00072b98

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048      206847      102400    7  HPFS/NTFS/exFAT
        /dev/sda2          206848   100020223    49906688    7  HPFS/NTFS/exFAT
        /dev/sda3       100022270  1250263039   575120385    5  Extended
        /dev/sda4       566669312  1250263039   341796864   83  Linux

    I also tried creating partitions from Disk Utility, which results in an error:

        Error creating partition: helper exited with exit code 1: In part_add_partition: device_file=/dev/sda, start=51211402240, size=1923000000, type=0x83
        Entering MS-DOS parser (offset=0, size=640135028736)
        MSDOS_MAGIC found
        looking at part 0 (offset 1048576, size 104857600, type 0x07)
        new part entry
        looking at part 1 (offset 105906176, size 51104448512, type 0x07)
        new part entry
        looking at part 2 (offset 51211402240, size 588923274240, type 0x05)
        Entering MS-DOS extended parser (offset=51211402240, size=588923274240)
        readfrom = 51211402240
        MSDOS_MAGIC found
        Exiting MS-DOS extended parser
        looking at part 3 (offset 290134687744, size 349999988736, type 0x83)
        new part entry
        Exiting MS-DOS parser
        MSDOS partition table detected
        containing partition table scheme = 1
        got it
        Error: Can't have overlapping partitions.
        ped_disk_new() failed

    Here's what I get when I try to install the system: i.stack.imgur.com/pjlb9.png, i.stack.imgur.com/g1lXN.png

    P.S. It's strange that I can't even create any more partitions, neither with Disk Utility nor with Windows 7's native tools.
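
    When fdisk and the installer disagree like this, it can help to print the table in sectors with a second tool and compare the start/end values by hand (a hedged suggestion, run from the "Try Ubuntu" session; the extended entry sda3 and any logical partitions should nest inside each other rather than overlap):

        sudo parted /dev/sda unit s print
        sudo sfdisk -l -uS /dev/sda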

  • ubuntu dmidecode is not functioning properly

    - by Alaa Alomari
    dmidecode is giving irrelevant and conflicting results. It shows that I have two slots, while the correct number is 8 (the board is a Tyan S5350).

        uname -a
        Linux synd01 3.0.0-16-server #29-Ubuntu SMP Tue Feb 14 13:08:12 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

        root@synd01:/home/badmin# dmidecode -t 16
        dmidecode 2.9
        SMBIOS 2.33 present.

        Handle 0x0011, DMI type 16, 15 bytes
        Physical Memory Array
                Location: System Board Or Motherboard
                Use: System Memory
                Error Correction Type: None
                Maximum Capacity: 4 GB
                Error Information Handle: Not Provided
                Number Of Devices: 2

    while

        root@synd01:/home/badmin# dmidecode -t 17 | grep Size
                Size: No Module Installed
                Size: No Module Installed
                Size: 1024 MB
                Size: 1024 MB
                Size: No Module Installed
                Size: No Module Installed
                Size: 1024 MB
                Size: 1024 MB

    Also, lshw shows:

        *-memory
             description: System Memory
             physical id: 11
             slot: System board or motherboard
             size: 4GiB
           *-bank:0
                description: DIMM DDR Synchronous 166 MHz (6.0 ns) [empty]
                physical id: 0
                slot: J3B1
                clock: 166MHz (6.0ns)
           *-bank:1
                description: DIMM DDR Synchronous 166 MHz (6.0 ns) [empty]
                physical id: 1
                slot: J3B3
                clock: 166MHz (6.0ns)
           *-bank:2
                description: DIMM DDR Synchronous 166 MHz (6.0 ns)
                physical id: 2
                slot: J2B2
                size: 1GiB
                width: 64 bits
                clock: 166MHz (6.0ns)
           *-bank:3
                description: DIMM DDR Synchronous 166 MHz (6.0 ns)
                physical id: 3
                slot: J2B4
                size: 1GiB
                width: 64 bits
                clock: 166MHz (6.0ns)
           *-bank:4
                description: DIMM DDR Synchronous 166 MHz (6.0 ns) [empty]
                physical id: 4
                slot: J3B2
                clock: 166MHz (6.0ns)
           *-bank:5
                description: DIMM DDR Synchronous 166 MHz (6.0 ns) [empty]
                physical id: 5
                slot: J2B1
                clock: 166MHz (6.0ns)
           *-bank:6
                description: DIMM DDR Synchronous 166 MHz (6.0 ns)
                physical id: 6
                slot: J2B3
                size: 1GiB
                width: 64 bits
                clock: 166MHz (6.0ns)
           *-bank:7
                description: DIMM DDR Synchronous 166 MHz (6.0 ns)
                physical id: 7
                slot: J1B1
                size: 1GiB
                width: 64 bits
                clock: 166MHz (6.0ns)

    What might cause this conflict, and how can I fix it? Thanks

  • Apache Name-based Virtual Hosts - configuring httpd.conf file

    - by Dave
    Hi there. I am running a web app on Tomcat at the following location on my server:

        /var/tomcat/webapps/SoccerApp

    I am looking to update the Tomcat httpd.conf file with the following virtual host:

        <VirtualHost *:80>
          DocumentRoot /var/tomcat/webapps/SoccerApp/MyTeam
          ServerName www.mysoccerapp.com
        </VirtualHost>

    This gives me a 404 error, as the directory MyTeam does not exist. However, my application behaves in a way that it uses this URL directory as the name of the soccer team for which to display data, so it will never be a physical folder on the server. Nonetheless, I would like www.mysoccerapp.com to resolve to webapps/SoccerApp/MyTeam, even though the directory isn't there. Does this make any sense? Any ideas on how to get this working?

    At the end of the day, I want to do the following:

    - www.teamone.com -> runs /webapps/SoccerApp/TeamOne
    - www.teamtwo.com -> runs /webapps/SoccerApp/TeamTwo

    ...where TeamOne and TeamTwo are not physical directories, but merely processed by my SoccerApp application as the current soccer team to display data for. Many many thanks! Dave.

  • Is this "cache administrator" error my server's problem?

    - by Eoin
    Hey, I have a CentOS VPS running Apache with a phpBB installation. One specific user has received errors when posting a message or logging in to the forum. The issue has arisen in parallel with installing nginx, which serves only the static files of my site; I'm not sure if this is only coincidence. Furthermore, my setup uses redirects (in some cases, double redirects) to point the user to a different virtual folder, so the forum is seen to be at /translation/ but the actual files are found in /phpbb/. I'm at a loss as to what the underlying issue may be. My server? The person's ISP? She has tested both at home and at work, with similar issues.

        While trying to process the request:

        GET /phpbb/index.php?sid=f62c927e7eb8f1d60a92dcc6fd918112 HTTP/1.1
        Host: www.irishgaelictranslator.com
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-za
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Referer: http://www.irishgaelictranslator.com/phpbb/ucp.php?mode=login
        Cookie: phpbb3_cipi4_u=96645; phpbb3_cipi4_k=; phpbb3_cipi4_sid=f62c927e7eb8f1d60a92dcc6fd918112; __utma=153470688.1232378553.1294664234.1294664234.1294664234.1; __utmb=153470688.9.10.1294664234; __utmc=153470688; __utmz=153470688.1294664235.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); style_cookie=null

        The following error was encountered:

        Invalid Response

        The HTTP Response message received from the contacted server could not be understood or was otherwise malformed. Please contact the site operator. Your cache administrator may be able to provide you with more details about the exact nature of the problem if needed.

  • SSL connection hangs as client hello (curl, openssl client, apt-get, wget, everything)

    - by Niklas B
    Hi, I've run into a problem on my Debian VPS (a Xen domU) regarding SSL: almost all SSL connections hang at the client hello. For example:

        # curl -vI https://graph.facebook.com
        About to connect() to graph.facebook.com port 443 (#0)
        Trying 66.220.146.48... connected
        Connected to graph.facebook.com (66.220.146.48) port 443 (#0)
        successfully set certificate verify locations:
          CAfile: none
          CApath: /etc/ssl/certs
        SSLv3, TLS handshake, Client hello (1):

    It's the same when using the openssl client. However, some SSL traffic works (for example https://www.nordea.se).

    Server:

        # uname -a
        Linux server.com 2.6.26-1-xen-amd64 #1 SMP Fri Mar 13 21:39:38 UTC 2009 x86_64 GNU/Linux

    It does, however, work on my dom0 (the main Xen host).

    Apt-get: I can't even run apt-get update with the Debian security sources (it hangs on reading headers).

    OpenSSL: at the beginning I thought I had an old OpenSSL client (0.9.8o-4), since I appeared to have a newer one on the dom0 (0.9.8g-15+lenny8), but doing a manual update of the openssl deb didn't help.

    OpenSSL client: here is the full output from when the openssl client hangs: http://pastebin.com/PAjwMap9

    Closing thoughts: I've Googled the crap out of this, and I'm not getting any further. I've seen problems with curl, apt-get etc., but they are all specific to the particular application - not general for the system. Any thoughts?
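
    Two hedged checks that are sometimes worth trying when a TLS handshake stalls right after the ClientHello on a Xen guest - pinning the protocol version, and testing whether full-size packets leave the domU unfragmented (the host name is taken from the question; eth0 is an assumption):

        openssl s_client -connect graph.facebook.com:443 -tls1   # does forcing a version change anything?
        openssl s_client -connect graph.facebook.com:443 -ssl3
        ping -M do -s 1472 graph.facebook.com                    # 1472 + 28 header bytes = a 1500-byte packet with DF set
        ip link show eth0 | grep -o 'mtu [0-9]*'                 # current MTU on the guest interface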

  • How to move your Windows User Profile to another drive in Windows 8

    - by Mark
    I like to have my user folder on a different drive (D:) than my OS (C:). After reading the following post I decided to give it a try. All went quite well, until I found out that my Windows 8 apps won't start anymore (other than that, I haven't noticed any problems). The apps do work when I use an account that hasn't been moved. In the Event Viewer I've found error messages like these:

        App <Microsoft.MicrosoftSkyDrive> crashed with an unhandled Javascript exception. App details are as follows:
        Display Name: <SkyDrive>
        AppUserModelId: <microsoft.microsoftskydrive_8wekyb3d8bbwe!Microsoft.MicrosoftSkyDrive>
        Package Identity: <microsoft.microsoftskydrive_16.4.4204.712_x64__8wekyb3d8bbwe>
        PID: <4452>

        The details of the JavaScript exception are as follows:
        Exception Name: <WinRT error>
        Description: <Loading the state store failed.>
        HTML Document Path: </modernskydrive/product/skydrive/App.html>
        Source File Name: <ms-appx://microsoft.microsoftskydrive/jx/jx.js>
        Source Line Number: <1>
        Source Column Number: <27246>
        Stack Trace:
        ms-appx://microsoft.microsoftskydrive/jx/jx.js:1:27246 localSettings()
        ms-appx://microsoft.microsoftskydrive/jx/jx.js:1:51544 _initSettings()
        ms-appx://microsoft.microsoftskydrive/jx/jx.js:1:54710 getApplicationStatus(boolean)
        ms-appx://microsoft.microsoftskydrive/jx/jx.js:1:48180 init(object)
        ms-appx://microsoft.microsoftskydrive/jx/jx.js:1:45583 Application(number, boolean)
        ms-appx://microsoft.microsoftskydrive/modernskydrive/product/skydrive/App.html:216:13 Anonymous function(object)

    Using ProcMon, I see a lot of ACCESS DENIED messages, like these:

        Date & Time:      12-9-2012 9:32:20
        Event Class:      File System
        Operation:        CreateFile
        Result:           ACCESS DENIED
        Path:             D:\Users\John\AppData\Local\Packages\microsoft.microsoftskydrive_8wekyb3d8bbwe\Settings\settings.dat
        TID:              2520
        Duration:         0.0000149
        Desired Access:   Read Data/List Directory, Write Data/Add File, Read Control
        Disposition:      OpenIf
        Options:          Sequential Access, Synchronous IO Non-Alert, No Compression
        Attributes:       N
        ShareMode:        None
        AllocationSize:   0

    Any idea how to solve this? I noticed that the app folders (e.g. D:\Users\john\AppData\Local\Packages\microsoft.microsoftskydrive_8wekyb3d8bbwe) had a different owner than the old profile folder had. The old profile folder had john as owner, whereas my new profile folder had the Administrators group as owner. Changing this didn't help, unfortunately.

  • OpenVZ with bridged interfaces and VLAN

    - by Deimosfr
    Hi, I've got a problem with OpenVZ and a bridged VLAN. Here is my configuration:

        +-----+    +------------+      +------------+   br0   +-------+
        | WAN |--->|  OpenBSD   |----->|   Debian   |-------->| VE101 |
        +-----+    |  Router /  |      |   OpenVZ   |         +-------+
                   |  Firewall  |----->|  br0  br1  |   br1   +-------+
                   +------------+      +------------+-------->| VE102 |
                                             |                +-------+
                                             | VLAN br0.110
                                             v
                                       +-----------+
                                       | VE103.110 |
                                       +-----------+

    I can't make the VLAN work on br0 (br0.110) and I would like to understand why. I don't have any switch, so there is no problem with an unmanageable switch. I've configured a VLAN interface on OpenBSD in /etc/hostname.vlan110:

        inet 192.168.110.254 255.255.255.0 NONE vlan 110 vlandev sis1

    and it seems to be working fine. I've also adapted my PF configuration to work with the VLAN, but I don't see any incoming traffic. On my Debian Lenny host, here is my interfaces configuration:

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # br0
        auto br0
        iface br0 inet static
                address 192.168.100.1
                netmask 255.255.255.0
                gateway 192.168.100.254
                network 192.168.100.0
                broadcast 192.168.100.255
                bridge_ports eth0
                bridge_fd 9
                bridge_hello 2
                bridge_maxage 12
                bridge_stp off

        # VLAN 110
        auto br0.110
        iface br0.110 inet static
                address 192.168.110.1
                netmask 255.255.255.0
                network 192.168.110.0
                gateway 192.168.110.254
                broadcast 192.168.110.255
                pre-up vconfig add br0 110
                post-down vconfig rem br0.110

    It looks OK, but when I start my VE, here is the message:

        ...
        Configure veth devices: veth103.0
        Adding interface veth103.0 to bridge br0.110 on CT0 for VE103
        can't add veth103.0 to bridge br0.110: Operation not supported
        VE start in progress...

    So I've got one error here. I've followed this documentation http://wiki.openvz.org/VLAN but it doesn't work. I've certainly missed something but I don't know what. Could someone help me please? Thanks
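
    A couple of hedged sanity checks on the OpenVZ host before digging further - the error usually means the kernel refused the bridge operation itself, so it is worth confirming what br0.110 actually is and that 802.1q support is loaded:

        lsmod | grep 8021q     # VLAN tagging support in the kernel
        brctl show             # lists real bridges; a vconfig-created br0.110 is a VLAN interface on top of br0, not a bridge
        ip link show br0.110   # does the interface exist and is it up?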

  • Windows 7 boots to black screen with blinking cursor

    - by murgatroid99
    I have an Alienware M17x that dual-boots Ubuntu 11.04 and Windows 7 Home Premium. Currently the computer starts at the GRUB loader and will boot into Ubuntu, but if I try to boot into Windows, I immediately get a black screen with a blinking cursor in the upper left corner. The output of fdisk -l is:

           Device Boot    Start      End      Blocks  Id  System
        /dev/dm-0p1           1        5       40131  de  Dell Utility
        Partition 1 does not start on physical sector boundary.
        /dev/dm-0p2           6     1918    15360000   7  HPFS/NTFS
        Partition 2 does not start on physical sector boundary.
        /dev/dm-0p3   *    1918    64772  504878877+   7  HPFS/NTFS
        Partition 3 does not start on physical sector boundary.
        /dev/dm-0p4       64772    77827   104858625   5  Extended
        Partition 4 does not start on physical sector boundary.
        /dev/dm-0p5       64772    67204    19531008  83  Linux
        /dev/dm-0p6       67204    74498    58593536  83  Linux
        /dev/dm-0p7       74498    77577    24731648  83  Linux
        /dev/dm-0p8       77578    77827     2000128  82  Linux swap / Solaris

    I have used the Windows rescue CD and run the automatic error fixer until it finds no errors. I have run chkdsk /R on both the main Windows 7 partition (/dev/dm-0p3) and the recovery partition (/dev/dm-0p2). I set the main Windows 7 partition to be active. I also tried running the following in the recovery console:

        bootrec /fixmbr
        bootrec /fixboot
        bootrec /rebuildbcd

    None of these helped, and the last set of commands deletes GRUB, which I then have to reinstall from Ubuntu. I think the last thing I did in Windows before this started was install the newest ATI driver for my video card. This would suggest using System Restore, and I actually had a restore point earlier (after the problem started), but after whatever I did, that restore point no longer appears in the list on the recovery disk, so I cannot do a System Restore. Is there anything else I can try to make Windows boot properly again?

    Edit: Running the suggested commands

        bootsect /nt60 c:
        bcdboot c:\windows /s c:

    was also ineffective.

  • Missing whole disk device in OpenSolaris

    - by Jeff Mc
    I have begun experimenting with Solaris and ZFS as a NAS. All was going very smoothly until I had a drive failure. When I replaced the drive, I no longer had a device file mapped to the whole disk: /dev/dsk/c7t3d0 does not exist, but c7t2d0 and c7t4d0 both do. Also, the sd@3,0:wd file under the /devices/ tree is non-existent. Do I have to prepare/partition the disk somehow to cause the whole-disk device to exist? Here are a few outputs that might be useful.

        jeffmc@ats-ds2:/dev/dsk$ zpool status
          pool: datapool
         state: DEGRADED
        status: One or more devices could not be opened. Sufficient replicas exist for
                the pool to continue functioning in a degraded state.
        action: Attach the missing device and online it using 'zpool online'.
           see: http://www.sun.com/msg/ZFS-8000-2Q
         scrub: none requested
        config:

                NAME        STATE     READ WRITE CKSUM
                datapool    DEGRADED     0     0     0
                  mirror-0  DEGRADED     0     0     0
                    c7t2d0  ONLINE       0     0     0
                    c7t3d0  UNAVAIL      0     0     0  cannot open
                  mirror-1  ONLINE       0     0     0
                    c7t4d0  ONLINE       0     0     0
                    c7t5d0  ONLINE       0     0     0

        jeffmc@ats-ds2:/dev/dsk$ zpool replace datapool c7t3d0
        cannot open 'c7t3d0': no such device in /dev/dsk
        must be a full path or shorthand device name

        jeffmc@ats-ds2:/dev/dsk$ sudo format
        Searching for disks...done

        AVAILABLE DISK SELECTIONS:
               0. c7t0d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@0,0
               1. c7t1d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@1,0
               2. c7t2d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@2,0
               3. c7t3d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@3,0
               4. c7t4d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@4,0
               5. c7t5d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@5,0
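
    A hedged note on regenerating the links: on OpenSolaris the /dev/dsk entries are just symlinks into the /devices tree, and they can usually be rebuilt without touching the pool data, along these lines:

        devfsadm -Cv              # as root: remove stale links and create missing ones
        format                    # label the replacement disk if it has never been labeled
        ls -l /dev/dsk/c7t3d0*    # the whole-disk and slice links should now be present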

  • Should I partition my main table with 2 million rows?

    - by domribaut
    Hi, I am a developer and would need some DBA advice. We are starting to get performance problems with a MSSQL 2005 database. The visible effect of the incidents is mainly CPU hogging on the server, but operations reported that it was also draining resources from the SAN (not always). The main source of the issues is for sure in some application, but I am wondering if we should partition some of the main tables anyway in order to relieve the I/O pressure. The database is about 60GB in one file. The main table (Order) has 2.1 million rows and 215 columns (but none is huge). We have an integer as PK, so it should be OK to define a partition function. Will we gain something with partitioning? Will partitioned indexes buy us something?

    Here are some more facts about the DB and the table:

        database_name  database_size  unallocated space
        My_base        57173.06 MB    79.74 MB

        reserved       data           index_size     unused
        29 444 808 KB  26 577 320 KB  2 845 232 KB   22 256 KB

        name   rows       reserved      data          index_size    unused
        Order  2 097 626  4 403 832 KB  2 756 064 KB  1 646 080 KB  1688 KB

    Thanks for any advice. Dom

  • Hyper-V vss-writer not making current copies

    - by Martinnj
    I'm using DiskShadow to back up live Hyper-V machines on a Windows 2008 server. The backup consists of 3 scripts: the first creates the shadow copies and exposes them, the second uses robocopy to copy them to a remote location, and the third unexposes the shadow copies again.

    The first script - the one that runs correctly but fails to do what it's supposed to:

        # DiskShadow script file to backup VM from a Hyper-V host
        # First, delete any shadow copies of the drives. System Drives needs to be included.
        Delete Shadows volume C:
        Delete Shadows volume D:
        Delete Shadows volume E:
        # Ensure that shadow copies will persist after DiskShadow has run
        set context persistent
        # make sure the path already exists
        set verbose on
        begin backup
        add volume D: alias VirtualDisk
        add volume C: alias SystemDrive
        # verify the "Microsoft Hyper-V VSS Writer" writer will be included in the snapshot
        # NOTE: The writer GUID is exclusive for this install/machine, must be changed on other machines!
        writer verify {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
        create
        end backup
        # Backup is exposed as drive X: make sure your drive letter X is not in use
        Expose %VirtualDisk% X:
        Exit

    The next script is just a robocopy, and then an unexpose.

    Now, when I run the above script I get no errors from it, except that the "BITS" writer has been excluded because none of its components are included. That's okay, because I really only need the Hyper-V writer. I also double-checked the GUID for the writer; it's correct. During the time when the Hyper-V writer becomes active, 2 things happen on the guest machines:

    - The Debian/Linux machine goes to a saved state and restores when done - all fine.
    - The Windows guests show "creating VSS snapshot sets" or something similar.

    Then X: gets exposed and I can copy the .vhd files over. The problem is that, for some reason, the VHD files I get seem to be old copies; they are missing files, users and updates that are present on the actual machines. I also tried putting the machines into a saved state manually, but it didn't change the outcome. I hope someone here has an idea of how to solve this.

  • Unable to sync iPod Touch with my PC

    - by alex
    I'm trying to sync a first-gen iPod Touch to my PC running Windows 7 64-bit. The problem is that whenever I connect the iPod, iTunes completely freezes (if I start iTunes after connecting the iPod, it will simply hang until the iPod is physically disconnected from the PC). I reinstalled iTunes thinking that it had been corrupted, but without any luck. I've had this problem with all the latest versions of iTunes. I've also tried using MediaMonkey and DoubleTwist. None of these apps sees the iPod as being connected; DoubleTwist also freezes, just like iTunes. The really strange thing is that I was able to sync the iPod with this PC a while back, but I now seem to have lost that ability, and I don't know what changed. Windows detects the device every time it's plugged in (I can see it in Device Manager and I can browse all photos on the iPod as if it were a camera). Also, I can sync it to iTunes on Mac OS X without any major problems.

  • How do I reinitialise a failed RAID 5 drive using terminal on Ubuntu Server

    - by Stephen
    I've recently put together a new system, and part of that has been creating a software RAID 5 using mdadm on Ubuntu Server. I successfully got to the point where I created the array using:

        sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

    I left it to do its thing overnight, then used the following command to check on it:

        watch cat /proc/mdstat

    which returned:

        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md0 : active raid5 sdd1[4](S) sdc1[2] sdb1[1] sda1[0](F)
              5860535808 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/2] [_UU_]

        unused devices: <none>

    It appears that one drive has failed (and I'm not too savvy about why another is marked as a spare). So, just to be sure that something else isn't amiss, I wanted to try to re-engage the failed drive. Can someone explain how I can do that, and what I should do with the spare (if anything)? And how do I know when synchronisation is complete? The tutorial I used to get this far is located here: http://sonniesedge.co.uk/2009/06/13/software-raid-5-on-ubuntu-904/

    Many thanks!

    p.s. Here is some extra information that may help:

        sudo mdadm --detail /dev/md0
        /dev/md0:
                Version : 1.2
          Creation Time : Mon Jun 18 21:14:21 2012
             Raid Level : raid5
             Array Size : 5860535808 (5589.04 GiB 6001.19 GB)
          Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
           Raid Devices : 4
          Total Devices : 4
            Persistence : Superblock is persistent

            Update Time : Mon Jun 18 21:50:26 2012
                  State : clean, FAILED
         Active Devices : 2
        Working Devices : 3
         Failed Devices : 1
          Spare Devices : 1

                 Layout : left-symmetric
             Chunk Size : 512K

                   Name : myraidbox:0  (local to host myraidbox)
                   UUID : a269ee94:a161600c:fb1665e7:bd2f27b3
                 Events : 13

            Number   Major   Minor   RaidDevice State
               0       0        0        0      removed
               1       8       17        1      active sync   /dev/sdb1
               2       8       33        2      active sync   /dev/sdc1
               3       0        0        3      removed

               0       8        1        -      faulty spare   /dev/sda1
               4       8       49        -      spare   /dev/sdd1
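
    For reference, a hedged sketch of the mdadm commands usually involved in re-engaging a member that has been marked faulty (device names copied from the output above; whether a rebuild can actually start depends on the array still having enough working members, which the "clean, FAILED" state above puts in doubt):

        sudo smartctl -a /dev/sda | less         # first make sure the drive itself is healthy (smartmontools)
        sudo mdadm /dev/md0 --remove /dev/sda1   # drop the faulty member from the array
        sudo mdadm /dev/md0 --add /dev/sda1      # add it back; a resync starts if the array can rebuild
        watch cat /proc/mdstat                   # shows a progress bar during the resync; it disappears when the sync is finished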

  • Standard PHP configuration command for centos 6.5

    - by Krishna
    First of all, let me tell you that my knowledge of managing servers and configuring them is very basic. Now I am trying to set up a server for my personal use. The server is up and running CentOS 6.5 with PHP, MySQL and ISPConfig. I felt that even though PHP is installed, it needs some configuration for a standard web server. I searched around and found some 'configure' commands, but none of them worked; the guides did not give specific step-by-step guidance on how to do this. Can somebody help me configure PHP in a standard way? Thanks in advance.

    Some more detail, as required:

    1. I did not install PHP from source.
    2. Yes, I want some standard modules to be enabled in PHP.
    3. I also want to know whether I need to change some settings from the defaults. I do have a managed VPS; do you suggest I copy php.ini from that VPS to the new dedicated server, so that all the required settings that are working fine for me are copied here as well?

    Thanks
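
    Since this PHP came from packages rather than from source, the './configure' step does not apply; extra functionality is normally added per module with yum plus a reload. A hedged sketch - the module list is only a common example, and some packages (php-mcrypt, for instance) may need the EPEL repository:

        yum install php-mysql php-gd php-mbstring php-xml
        php -m                  # list the modules PHP can now load
        php --ini               # show which php.ini and conf.d files are read
        service httpd restart   # restart Apache so mod_php picks up the changes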

  • How can I prevent Apache from asking for credentials on non SSL site

    - by Scott
    I have a web server with several virtual hosts. Some of those hosts have an associated SSL site. I have a DirectoryMatch directive in my main config file which requires basic authentication for any directory with "secured" as part of the directory path. On sites that have an SSL site, I have a rewrite rule (located in the non-SSL config for that site) that redirects to the SSL site, same URI.

    The problem is that the HTTP (80) site asks for credentials first, and then the HTTPS (443) site asks for credentials again. I would like to prevent the HTTP site from asking, and thus avoid the potential for someone entering credentials and having them sent in clear text. I know I could move the DirectoryMatch down to the specific site and just put the auth statement in the SSL config, but that would introduce the possibility of forgetting to protect critical directories when creating new sites.

    Here are the pertinent declarations:

    httpd.conf (all sites):

        <DirectoryMatch "_secured_">
          AuthType Basic
          AuthName "+ + + Restrcted Area on Server + + +"
          AuthUserFile /home/websvr/.auth/std.auth
          Require valid-user
        </DirectoryMatch>

    site.conf (specific to the individual site):

        <DirectoryMatch "_secured_">
          RewriteEngine On
          RewriteRule .*(_secured_.*) https://site.com/$1
        </DirectoryMatch>

    Is there a way to leave DirectoryMatch in the main config file and prevent the request for authorization from the HTTP site? Running Apache 2 on Ubuntu 10.04 server from the default package. I have AllowOverride set to None - I prefer to handle things in the config files instead of .htaccess.
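
    A hedged way to confirm the ordering from the outside (the path below is hypothetical): if the plain-HTTP request comes back 401, authentication fires before the rewrite; if it comes back 301/302 to the HTTPS URL, the redirect wins.

        curl -sI  http://site.com/some/_secured_/page | head -n 1
        curl -skI https://site.com/some/_secured_/page | head -n 1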
