Search Results

Search found 15931 results on 638 pages for 'password storage'.

Page 376/638

  • Forgot the username

    - by prithviraj
    Hello all, I have Fedora installed on my system. I know the password but I forgot the username. I can get access through a terminal, but I don't know how to log in through the GUI. Please help me. Thanks in advance.
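
    A minimal way to recover the account name from that terminal session (a sketch; regular accounts start at UID 1000 on recent Fedora releases, UID 500 on older ones):

        # list home directories -- usually one per local user
        ls /home
        # or list account names in the human UID range
        getent passwd | awk -F: '$3 >= 500 && $3 < 65534 {print $1}'

    Logging in to the GUI with the recovered name and the known password should then work.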

    Read the article

  • Eclipse CVS repository explorer

    - by Shadi
    Hi, when I want to start the CVS repository explorer, it cannot connect to the server. I entered something like "a.b.com" as the host and "/c/d" as the repository path, and I entered my username, password, and module name correctly. Does anybody know what the problem is? Thank you so much. shadi :)
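
    For reference, those fields assemble into a single CVSROOT string; a pserver-style example (the connection type is an assumption, since the post doesn't say which one was selected in Eclipse):

        :pserver:username@a.b.com:/c/d

    If the server only accepts SSH, Eclipse's ext or extssh connection type would be needed instead of pserver, and a wrong connection type alone is enough to produce "cannot connect" errors.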

    Read the article

  • NFS-Root not working when booting over PXE

    - by Randy
    I am desperately trying to get a diskless client running over PXE boot, using an NFS share as the root file system. I did this some years ago, but for some reason I have been stuck on this for days now. The TFTP server itself is running fine, and booting a net installer also works. The kernel and initrd are loaded too, but the boot process stops with a kernel panic (screenshot). I'm using the standard Squeeze i386 kernel and I have prepared the initrd with this config:

        MODULES=most
        BUSYBOX=y
        KEYMAP=n
        COMPRESS=gzip
        BOOT=nfs
        DEVICE=
        NFSROOT=auto

    I also tried MODULES=netboot with the same outcome. My PXE configuration looks like this:

        LABEL linux
        KERNEL diskless/debian-default/vmlinuz-2.6.32-5-686
        APPEND root=/dev/nfs initrd=diskless/debian-default/vmlinuz-2.6.32-5-686 nfsroot=192.168.140.2:/storage/nfs-boot-images/default-squeeze ip=dhcp rw

    Furthermore, I have captured the client's network traffic via tcpdump and learned that the client isn't even trying to connect to the NFS share. Does anybody have an idea what is going wrong here?
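
    One detail stands out in the quoted configuration: the initrd= parameter points at the kernel image itself rather than at an initramfs, which alone would explain a panic before the NFS share is ever contacted. A corrected stanza might look like this (the initrd filename is an assumption; it would be whatever mkinitramfs actually produced):

        LABEL linux
        KERNEL diskless/debian-default/vmlinuz-2.6.32-5-686
        APPEND root=/dev/nfs initrd=diskless/debian-default/initrd.img-2.6.32-5-686 nfsroot=192.168.140.2:/storage/nfs-boot-images/default-squeeze ip=dhcp rw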

    Read the article

  • How to change default permission for uploaded files in apache with mounted webroot?

    - by faridv
    I have an Ubuntu Server 11.10 with Apache 2.2.20, PHP 5.3.6, and an installation of the Joomla CMS. I have used an extra hard disk as my web server storage and mounted it at /data/www/ (I hope that's not where my problem is!). I've set permissions on all files and folders in the web root to 755, and their group ownership is set to [default ubuntu user (in my case radio)]:www-data. In the past days I have had serious problems with Joomla not showing newly uploaded images and other files, and I can't install any extensions. After hours of searching I found out that uploaded files don't get appropriate permissions (they are -rw-------), so the Joomla application cannot read, copy, or move them after upload. I'm wondering how I can set a default permission so that all uploaded files use it. PS: I've tested umask but it did nothing; I think it has nothing to do with my problem.
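
    Files created as -rw------- point to a 0077 umask in the process writing them, and on Ubuntu that can be changed for Apache itself rather than in PHP. A sketch, assuming the stock apache2 package layout:

        # append to /etc/apache2/envvars
        umask 002

        # then restart Apache so the new umask takes effect
        sudo service apache2 restart

    With a 002 umask, files written by PHP under Apache are created 664/775, so the radio:www-data group mapping described above can actually take effect.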

    Read the article

  • ASP.NET access a folder as ASPNET even though impersonation is set

    - by Ron Harlev
    I have my ASP.NET web.config set up with impersonation:

        <identity impersonate="true" userName="domainName\userName" password="userPassword" />

    I'm running a method like IO.Directory.GetFiles(somePath) and monitoring the file system access with Process Monitor. I keep seeing all the access requests to the folder coming from the aspnet_wp.exe process as the ASPNET user. Why am I not seeing the access as the impersonated user?
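
    A quick way to check which identity the request thread actually runs under, independently of how Process Monitor attributes the I/O (a sketch to drop into the page or handler in question):

        using System.Security.Principal;

        // With <identity impersonate="true"/> this should report the
        // impersonated domain account, not ASPNET.
        string who = WindowsIdentity.GetCurrent().Name;
        System.Diagnostics.Trace.WriteLine("Executing as: " + who);

    If this prints the domain account while Process Monitor still shows ASPNET, the impersonation itself is working and the question becomes how the access is being attributed.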

    Read the article

  • silverlight authentication

    - by user291400
    Good day! I have a Silverlight site (a Silverlight navigation application) and I want clients to log in on my site, with different rights for viewing pages. A WCF service gives me true or false when I submit a login and a password. If it returns true, I want to remember the logged-in user. How can I do it? Using cookies, a global variable, or something else?
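
    One simple pattern, sketched under the assumption that per-session state is enough (a Silverlight application instance lives as long as the hosting page): keep the authenticated user in application-scoped state once the WCF call returns true.

        // App.xaml.cs -- names are illustrative
        public partial class App : Application
        {
            // null until the WCF login call succeeds
            public static string CurrentUser { get; set; }
        }

        // in the login service's completed handler:
        // if (e.Result) App.CurrentUser = enteredLogin;

    This survives page navigation inside the application but not a browser refresh; persisting across visits would need cookies or isolated storage, and the real authorization checks still belong on the server.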

    Read the article

  • rsync to windows (cygwin)

    - by abergmeier
    We have a Windows file storage (don't ask) and now I want to rsync with the machine from Windows, Mac, and Linux. So I installed freeSSHd (the login shell is set to C:/cygwin64/bin/sh.exe) and set up certificates. Testing from Linux, test.dat comes back with 0 bytes, i.e. the shell is clean:

        ssh myuser@winmachinename "C:/cygwin64/bin/true.exe" > test.dat

    Double-checking with actual output also works fine:

        ssh myuser@winmachinename "C:/cygwin64/bin/ls.exe" > test.dat

    Now, when I call rsync:

        rsync --progress -avz -e ssh myuser@winmachinename:/c/Users ~/test

    it fails with:

        protocol version mismatch -- is your shell clean?
        (see the rsync man page for an explanation)
        rsync error: protocol incompatibility (code 2) at compat.c(174) [Receiver=3.1.0]

    From my reading of the docs, this should not happen when the first test is successful!? I am out of ideas by now; any recommendations on how to debug this?

    EDIT:

        | OS      | rsync version                            |
        |:--------|:-----------------------------------------|
        | Windows | rsync version 3.0.9  protocol version 30 |
        | Linux   | rsync version 3.1.0  protocol version 31 |
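
    Two hedged things to try. First, confirm which rsync the remote shell actually resolves over that same transport:

        ssh myuser@winmachinename "rsync --version"

    Second, point rsync at the Cygwin binary explicitly via --rsync-path (the path shown is an assumption based on the install location above):

        rsync --progress -avz -e ssh --rsync-path=/cygdrive/c/cygwin64/bin/rsync myuser@winmachinename:/c/Users ~/test

    The "is your shell clean?" complaint fires when anything extra appears on the stream before rsync's handshake, so any banner or startup output from sh.exe on the Windows side would also produce exactly this error.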

    Read the article

  • soap client not working in php

    - by Jin Yong
    I tried to write code in PHP that calls a web service to add a client's details; however, it doesn't seem to work for me and displays the following error:

        Fatal error: Uncaught SoapFault exception: [HTTP] Could not connect to host in D:\www\web_server.php:15
        Stack trace:
        #0 [internal function]: SoapClient->__doRequest(...)
        #1 [internal function]: SoapClient->__call('AddClient', Array)
        #2 D:\www\web_server.php(15): SoapClient->AddClient(Array)
        #3 {main}
          thrown in D:\www\web_server.php on line 15

    The relevant WSDL fragment:

        <s:element name="AddClient">
          <s:complexType>
            <s:sequence>
              <s:element minOccurs="0" maxOccurs="1" name="username" type="s:string"/>
              <s:element minOccurs="0" maxOccurs="1" name="password" type="s:string"/>
              <s:element minOccurs="0" maxOccurs="1" name="clientRequest" type="tns:ClientRequest"/>
            </s:sequence>
          </s:complexType>
        </s:element>
        <s:complexType name="ClientRequest">
          <s:sequence>
            <s:element minOccurs="0" maxOccurs="1" name="customerCode" type="s:string"/>
            <s:element minOccurs="0" maxOccurs="1" name="customerFullName" type="s:string"/>
            <s:element minOccurs="0" maxOccurs="1" name="ref" type="s:string"/>
            <s:element minOccurs="0" maxOccurs="1" name="phoneNumber" type="s:string"/>
            <s:element minOccurs="0" maxOccurs="1" name="Date" type="s:string"/>
          </s:sequence>
        </s:complexType>
        <s:element name="AddClientResponse">
          <s:complexType>
            <s:sequence>
              <s:element minOccurs="0" maxOccurs="1" name="AddClientResult" type="tns:clientResponse"/>
              <s:element minOccurs="0" maxOccurs="1" name="response" type="tns:ServiceResponse"/>
            </s:sequence>
          </s:complexType>
        </s:element>
        <s:complexType name="ClientResponse">
          <s:sequence>
            <s:element minOccurs="0" maxOccurs="1" name="testNumber" type="s:string"/>
          </s:sequence>
        </s:complexType>

    And the code that I wrote in PHP:

        <?php
        $client = new SoapClient($url);
        $result = $client->AddClient(array(
            'username'      => 'test',
            'password'      => 'testing',
            'clientRequest' => array(
                'customerCode'     => '18743',
                'customerFullName' => 'Gaby Smith',
                'ref'              => '',
                'phoneNumber'      => '0413496525',
                'Date'             => '12/04/2013'
            )
        ));
        echo $result->AddClientResponse;
        ?>

    Does anyone know where I went wrong in this code?
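
    "Could not connect to host" is raised before any SOAP semantics apply, so a first step is to confirm that $url really holds a reachable WSDL location and to let SoapClient keep diagnostics around. A sketch using standard SoapClient options:

        <?php
        // $url must point at the WSDL, e.g. "http://host/Service.asmx?WSDL" (illustrative)
        $client = new SoapClient($url, array(
            'trace'      => 1,    // keep last request/response for inspection
            'exceptions' => true, // surface failures as SoapFault
        ));
        try {
            $result = $client->AddClient(/* ... arguments as above ... */);
        } catch (SoapFault $f) {
            echo $f->getMessage();
            echo $client->__getLastRequest(); // what was actually sent, if anything
        }
        ?>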

    Read the article

  • Simple copy to pen-drive - 0x80070057

    - by yzraeu
    Hello guys, I have had this problem for a while and still haven't found the answer. I'm copying a specific 10 MB file to my pen drive, from any folder on the PC to any folder on the pen drive, and all I get is this: "0x80070057 The parameter is incorrect". I simply cannot copy the file at all!! The pen drive in this case is my Nokia 5800 in "Mass Storage" mode. Sometimes I cannot copy a single 5 or 7 MB MP3 file, and I have to disconnect and reconnect. The source file is not corrupted, and the destination works fine with other files; it's just with some files. If I change to another pen drive, it works fine.

    Read the article

  • Skipping hardlinks when using TSM Backup

    - by Lars Haugseth
    We need to back up a filesystem with lots of hardlinks. Since there are several hardlinks for each "true" file, we would like to skip all the hardlinks when backing up the filesystem, to avoid n exact copies of each file. The backup is done using Tivoli Storage Manager Backup, and we've been unable to get it to treat hardlinks as anything other than separate files to be backed up alongside each other. In case it's relevant for possible solutions, I'd like to note that it's possible to tell a hardlink from a proper file by the filename:

        foobarbaz-123.ext      # file
        foobarbaz-123-1.ext    # hardlink
        foobarbaz-123-2.ext    # hardlink
        barbazfoo-456.ext      # file
        barbazfoo-456-1.ext    # hardlink
        barbazfoo-456-2.ext    # hardlink
        barbazfoo-456-3.ext    # hardlink

    That is, all hardlinks have two hyphens in the filename, whereas proper files have just one. The server is running Ubuntu Linux, and the files are situated on a GFS volume on our SAN.
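
    Given that naming convention, one hedged option is to encode it as an include-exclude rule for the TSM client, since any name with two hyphen-separated suffixes is always a link. A sketch for the client's inclexcl file (the /san/data mount point and the .ext extension are placeholders taken from the listing above):

        * exclude names with two hyphenated suffixes anywhere below the volume
        exclude /san/data/.../*-*-*.ext

    This skips the hardlink names purely by pattern, while each file is still captured once under its single-hyphen name.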

    Read the article

  • HP ProLiant Smart Array "lock up" code 0x11

    - by ewwhite
    I have a ProLiant DL580 G7 server that experienced a storage subsystem failure during production. The system appeared available and responded to pings, but all I/O access stalled (the system load must have been 100+). The ASR did not trigger at the specified watchdog timeout, so I had to force a reboot from the iLO. During POST, I received the following error:

        A controller failure event occurred prior to this power-up. (Previous lock up code = 0x11)

    I haven't pulled the ADU report yet, but I'm curious as to what this error actually means. I was not responsible for the installation, but I can see that the firmware is very old. If there's anything else I should know about the error, I'd like to know for the post-mortem report. Edit: I should add that the server had 95 days of uptime prior to the lock-up.

    Read the article

  • kerberos ENC-TS

    - by alex-river
    What is wrong with the Heimdal configuration?

        kinit test
        test@REALM's Password:
        kinit: krb5_get_init_creds: No ENC-TS found

    /etc/krb5.conf contains:

        default_tgs_enctypes = des-cbc-crc
        default_tkt_enctypes = des-cbc-crc
        default_etypes = des-cbc-crc
        default_etypes_des = des-cbc-crc
        fcc-mit-ticketflags = true
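
    Single-DES enctypes are disabled by default in current Kerberos builds, which can leave the client with no usable enctype for the ENC-TS (encrypted timestamp) preauthentication step no matter what the default_etypes lines say. A hedged [libdefaults] sketch for a Heimdal client that still ships DES support:

        [libdefaults]
            allow_weak_crypto = true
            default_etypes = des-cbc-crc

    Whether allow_weak_crypto is honored depends on the Heimdal version in use, so treat this as a direction to test rather than a confirmed fix.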

    Read the article

  • Delete ManyToManyField in Django

    - by Mike
    I have the following models:

        class Database(models.Model):
            user = models.ForeignKey(User)
            name = models.CharField(max_length=100)
            created = models.DateTimeField(auto_now_add=True)
            updated = models.DateTimeField(auto_now=True)

        class DatabaseUser(models.Model):
            user = models.ForeignKey(User)
            name = models.CharField(max_length=100)
            password = models.CharField(max_length=100)
            database = models.ManyToManyField(Database)
            created = models.DateTimeField(auto_now_add=True)
            updated = models.DateTimeField(auto_now=True)

    One DatabaseUser can have many Databases under its control. The issue I have: if I go to delete a Database, it wants to delete the DatabaseUser also. Is there a way to stop this from happening easily?
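
    For what it's worth, deleting through the ORM only cascades over the join table for a ManyToManyField, not over the related model. A sketch (db and db_user are illustrative instances):

        # detach one Database from one DatabaseUser; deletes nothing else
        db_user.database.remove(db)

        # deleting the Database removes its join-table rows, but the
        # DatabaseUser rows themselves survive
        db.delete()

    If it is the admin's delete-confirmation page that lists the DatabaseUser, it is usually listing the relationship entries to be removed rather than the user objects themselves.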

    Read the article

  • Western Digital My Book World drops off network

    - by Macha
    Most of my storage in my house relies on a WD My Book World Edition 500GB network drive. I threw out the vendor crapware they give you to access it (a trial version of Mionet) after it started nagging me to upgrade, and set it up as a standard network drive using Windows' Map Network Drive. However, since then it has been dropping off the network after 30 minutes of non-usage. The only way to get it back on is to switch it off and on again at the plug socket. Does anyone know what is causing this, and hopefully how to fix it? EDIT: it's the original "blue rings" version with the latest firmware.

    Read the article

  • Which way should we choose to shorten backup time?

    - by facebook-100005613813158
    A company performs a full backup of its data on a daily basis for disaster recovery purposes. However, their backup process cannot be completed within the assigned backup time window. What would you recommend to this company about how to restructure its backup environment in order to minimize the backup time? We have four candidates:

    1. Perform LAN-based backup
    2. Weekly full backup and daily incremental
    3. Weekly full backup and daily cumulative
    4. Add more ISLs to increase bandwidth

    Comparing incremental with cumulative backup: an incremental captures only the changes since the last backup of any kind, while a cumulative (differential) backup captures everything since the last full, so the daily incremental window is surely the shorter of the two. But I don't know whether adding more ISLs is allowed in an existing storage system, or whether that operation can really shorten backup time.

    Read the article

  • CakePHP Test Fixtures Drop My Tables Permanently After Running A Test Case

    - by Frank
    I'm not sure what I've done wrong in my CakePHP unit test configuration. Every time I run a test case, the model tables associated with my fixtures are missing from my test database. After running an individual test case I have to re-import my database tables using phpMyAdmin. Here are the relevant files.

    This is the class I'm trying to test, comment.php. This table is dropped after the test:

        App::import('Sanitize');
        class Comment extends AppModel {
            public $name = 'Comment';
            public $actsAs = array('Tree');
            public $belongsTo = array('User' => array('fields' => array('id', 'username')));
            public $validate = array(
                'text' => array(
                    'rule'       => array('between', 1, 4000),
                    'required'   => 'true',
                    'allowEmpty' => 'false',
                    'message'    => "You can't leave your comment text empty!"
                )
            );
        }

    database.php:

        class DATABASE_CONFIG {
            var $default = array(
                'driver'     => 'mysql',
                'persistent' => false,
                'host'       => 'project.db',
                'login'      => 'projectman',
                'password'   => 'projectpassword',
                'database'   => 'projectdb',
                'prefix'     => ''
            );
            var $test = array(
                'driver'     => 'mysql',
                'persistent' => false,
                'host'       => 'project.db',
                'login'      => 'projectman',
                'password'   => 'projectpassword',
                'database'   => 'testprojectdb',
                'prefix'     => ''
            );
        }

    My comment.test.php file. This is the table that keeps getting dropped:

        <?php
        App::import('Model', 'Comment');

        class CommentTestCase extends CakeTestCase {
            public $fixtures = array('app.comment', 'app.user');

            function start() {
                $this->Comment =& ClassRegistry::init('Comment');
                $this->Comment->useDbConfig = 'test_suite';
            }

    This is my comment_fixture.php class:

        <?php
        class CommentFixture extends CakeTestFixture {
            var $name = "Comment";
            var $import = 'Comment';
        }

    And just in case, here is a typical test method in the CommentTestCase class:

        function testMsgNotificationUserComment() {
            $user_id = '1';
            $submission_id = '1';
            $parent_id = $this->Comment->commentOnModel('Submission', $submission_id, '0', $user_id, "Says: A");
            $other_user_id = '2';
            $msg_id = $this->Comment->commentOnModel('Submission', $submission_id, $parent_id, $other_user_id, "Says: B");
            $expected = array(array('Comment' => array(
                'id'            => $msg_id,
                'text'          => "Says: B",
                'submission_id' => $submission_id,
                'topic_id'      => '0',
                'ack'           => '0'
            )));
            $result = $this->Comment->getMessages($user_id);
            $this->assertEqual($result, $expected);
        }

    I've been dealing with this for a day now and I'm starting to be put off by CakePHP's unit testing. In addition to this issue, several times now I've had data inserted into my 'default' database configuration after running tests! What's going on with my configuration?!

    Read the article

  • File upload progress

    - by Cornelius
    I've been trying to track the progress of a file upload but keep ending up at dead ends (uploading from a C# application, not a webpage). I tried using WebClient as such:

        class Program
        {
            static volatile bool busy = true;

            static void Main(string[] args)
            {
                WebClient client = new WebClient();
                // Add some custom header information
                client.Credentials = new NetworkCredential("username", "password");
                client.UploadProgressChanged += client_UploadProgressChanged;
                client.UploadFileCompleted += client_UploadFileCompleted;
                client.UploadFileAsync(new Uri("http://uploaduri/"), "filename");
                while (busy)
                {
                    Thread.Sleep(100);
                }
                Console.WriteLine("Done: press enter to exit");
                Console.ReadLine();
            }

            static void client_UploadFileCompleted(object sender, UploadFileCompletedEventArgs e)
            {
                busy = false;
            }

            static void client_UploadProgressChanged(object sender, UploadProgressChangedEventArgs e)
            {
                Console.WriteLine("Completed {0} of {1} bytes", e.BytesSent, e.TotalBytesToSend);
            }
        }

    The file does upload and progress is printed out, but the progress runs much faster than the actual upload: with a large file the progress reaches the maximum within a few seconds while the actual upload takes a few minutes (it is not just waiting on a response; all the data have not yet arrived at the server). So I tried using HttpWebRequest to stream the data instead (I know this is not the exact equivalent of a file upload, as it does not produce multipart/form-data content, but it serves to illustrate my problem). I set AllowWriteStreamBuffering to false and set the ContentLength as suggested by this question/answer:

        class Program
        {
            static void Main(string[] args)
            {
                FileInfo fileInfo = new FileInfo(args[0]);
                HttpWebRequest client = (HttpWebRequest)WebRequest.Create(new Uri("http://uploadUri/"));
                // Add some custom header info
                client.Credentials = new NetworkCredential("username", "password");
                client.AllowWriteStreamBuffering = false;
                client.ContentLength = fileInfo.Length;
                client.Method = "POST";
                long fileSize = fileInfo.Length;
                using (FileStream stream = fileInfo.OpenRead())
                {
                    using (Stream uploadStream = client.GetRequestStream())
                    {
                        long totalWritten = 0;
                        byte[] buffer = new byte[3000];
                        int bytesRead = 0;
                        while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
                        {
                            uploadStream.Write(buffer, 0, bytesRead);
                            uploadStream.Flush();
                            Console.WriteLine("{0} of {1} written", totalWritten += bytesRead, fileSize);
                        }
                    }
                }
                Console.WriteLine("Done: press enter to exit");
                Console.ReadLine();
            }
        }

    The request does not start until the entire file has been written to the stream, and it already shows full progress by the time it starts (I'm using Fiddler to verify this). I also tried setting SendChunked to true (with and without setting ContentLength as well). It seems like the data still gets cached before being sent over the network. Is there something wrong with one of these approaches, or is there perhaps another way I can track the progress of file uploads from a Windows application?

    Read the article

  • Windows 2008 DHCP service fails - "...failed to see a directory server for authorization."

    - by ewwhite
    I have a small environment running Windows 2008 R2 where the DHCP service on the domain controller fails every two weeks. The most visible error is Event ID 1059, and the Event Viewer message is: "The DHCP service failed to see a directory server for authorization." The setup features two domain controllers and the usual services and roles (file, print, Exchange). Restarting the service fails for a variety of reasons; I've had the following messages at different times:

        "Not enough storage is available to complete this operation."
        "Unable to determine the DHCP Server version for the Server 192.168.x.x"
        "The DHCP service has detected that it is running on a DC and has no credentials configured for use with Dynamic DNS registrations initiated by the DHCP service."

    A reboot of the domain controller resolves the issue for about two weeks. The systems are virtualized and there are no network connectivity issues. Any ideas what's happening here?

    Read the article

  • Incremental backups in Quickbooks 2005

    - by Nathan DeWitt
    My church uses QuickBooks 2005. They back up to a 512 MB thumb drive, and have been backing up about every week for the past 18 months. The file size of the backups has grown from 14 MB to about 23 MB. I was planning on giving them a 1 or 2 GB thumb drive and calling it a day, but when I dumped this info into Excel and projected out the growth rate, I found that we'll hit 1 GB in July, 10 GB in about another 18 months, and then 100 GB about 18 months after that. It looks to me like QuickBooks saves all the transactions with every backup. Is there a way to force incremental backups? If this is the way it is, that's fine, but I'd rather not keep buying another order of magnitude of storage space every 18 months. Can I safely delete the previous backups and just keep the most recent two or three months' worth? Thanks.

    Read the article
