Search Results

  • Ping remote server and wait to get data

    - by infinity
    Hi, I'm building my first application for Android and I've reached a point where I can't find a solution, and I have no idea what to search for on Google. The problem: I ping a remote server with a GET request from the application, passing some parameters such as file_id. The server replies with a confirmation if the file exists, or an error otherwise, both in plain text. The error string is $$$ERROR$$$. The confirmation is actually a JSON string that holds the path to the file. If the file doesn't exist on the server, the server returns the error message and then starts downloading and processing the file, which normally takes 10-30 seconds. What would be the best way to check if the file is ready for download? I have a DownloadFile class that extends AsyncTask, but before I reach the point of downloading the file I need the URL, which depends on the previous request made in the main class on the UI thread. Here is some code:

        public class MainActivity extends Activity {
            private String getInfo() {
                // Create a new HttpClient and Post Header
                HttpClient httpClient = new DefaultHttpClient();
                HttpGet httpPost = new HttpGet(infoUrl);
                StringBuilder sb = null;
                String data;
                JSONObject jObject = null;
                try {
                    HttpResponse response = httpClient.execute(httpPost);
                    // This might be equal to "$$$ERROR$$$" if no file exists
                    sb = inputStreamToString(response.getEntity().getContent());
                } catch (ClientProtocolException e) {
                    Log.v("Error: pushItem ClientProtocolException: ", e.toString());
                } catch (IOException e) {
                    Log.v("Error: pushItem IOException: ", e.toString());
                }
                // Clean the data to be compliant JSON format
                data = sb.toString().replace("info = ", "");
                try {
                    jObject = new JSONObject(data);
                    data = jObject.getString("h");
                    fileTitle = jObject.getString("title");
                } catch (JSONException e) {
                    e.printStackTrace();
                }
                downloadUrl = String.format(downloadUrl, fileId, data);
                return downloadUrl;
            }
        }

    My idea was to get the content and, if it equals $$$ERROR$$$, loop until the JSON data is returned, but I guess there is a better solution. Note: I don't have control over the server output, so I have to deal with what I have.
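
    One way to avoid blocking the UI thread while waiting for the server is to move the whole check into the AsyncTask and poll with a small delay until the response is no longer the error string. A rough sketch, assuming a helper fetchInfo() that performs the GET and returns the response body as a String (the helper name, retry count and delay are illustrative, not from the original post):

        // Inside DownloadFile extends AsyncTask<String, Void, String>
        @Override
        protected String doInBackground(String... params) {
            final int maxAttempts = 30;           // give up after roughly 30 * 2 s
            for (int attempt = 0; attempt < maxAttempts; attempt++) {
                String body = fetchInfo();        // hypothetical helper doing the GET
                if (body != null && !body.contains("$$$ERROR$$$")) {
                    return body;                  // JSON with the file path is ready
                }
                try {
                    Thread.sleep(2000);           // wait before asking again
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return null;
                }
            }
            return null;                          // still not ready; report failure in onPostExecute
        }

    Since doInBackground() already runs off the UI thread, the sleep-and-retry loop doesn't freeze the activity, and onPostExecute() can then parse the JSON and start the actual download.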

  • What causes high CPU usage on the server during file upload

    - by bosiang
    When I try to upload a huge file (approx. 2 GB), the server CPU usage goes really high. What should I do to fix this? I just use a standard HTML form and PHP for the file upload. I'm sorry if I posted on the wrong forum; please point me in the right direction. Here is the result of the "top" command while uploading 4 files (18 MB, 38 MB, 60 MB, 33 MB):

        1904 apache  20  0 33504 5740 1952 R 28.3 0.2 0:02.19 httpd
        1905 apache  20  0 33504 5740 1952 R 28.3 0.2 0:01.99 httpd
        1903 apache  20  0 33232 6968 3060 R 28.0 0.2 0:01.98 httpd
        1910 apache  20  0 33240 6020 2248 S 11.5 0.2 0:02.85 httpd
        2133 root    20  0  2656 1124  896 R  1.6 0.0 0:00.71 top
           1 root    20  0  2864 1404 1188 S  0.0 0.0 0:03.99 init

    Here is the code for chunking, although even when I don't use this code (just a simple file upload), it still causes that high CPU usage:

        function sendRequest() {
            //clean the screen
            //bars.innerHTML = '';
            var file = document.getElementById('fileToUpload');
            for (var i = 0; i < file.files.length; i++) {
                var blob = file.files[i];
                var originalFileName = blob.name;
                var filePart = 0;
                const BYTES_PER_CHUNK = 100 * 1024 * 1024; // 100 MB chunk size
                var realFileSize = blob.size;
                var start = 0;
                var end = BYTES_PER_CHUNK;
                totalChunks = Math.ceil(realFileSize / BYTES_PER_CHUNK);
                alert(realFileSize);
                while (start < realFileSize) {
                    if (blob.webkitSlice) {        // for Google Chrome
                        var chunk = blob.webkitSlice(start, end);
                    } else if (blob.mozSlice) {    // for Mozilla Firefox
                        var chunk = blob.mozSlice(start, end);
                    }
                    uploadFile(chunk, originalFileName, filePart, totalChunks, i);
                    filePart++;
                    start = end;
                    end = start + BYTES_PER_CHUNK;
                }
            }
        }

  • jquery ajax form success callback not being called

    - by Michael Merchant
    I'm trying to upload a file using "AJAX", process data in the file and then return some of that data to the UI so I can dynamically update the screen. I'm using the jQuery Form Plugin, jquery.form.js, found at http://jquery.malsup.com/form/ for the JavaScript, and Django on the back end. The form is being submitted and the processing on the back end goes through without a problem, but when a response is received from the server, my Firefox browser prompts me to download/open a file of type "application/json". The file has the JSON content that I've been trying to send to the browser. I don't believe this is an issue with how I'm sending the JSON, as I have a modularized json_wrapper() function that I'm using in multiple places in this same application. Here is what my form looks like after Django templates are applied:

        <form method="POST" enctype="multipart/form-data" action="/test_suites/active/upload_results/805/">
          <p>
            <label for="id_resultfile">Upload File:</label>
            <input type="file" id="id_resultfile" name="resultfile">
          </p>
        </form>

    You won't see any submit buttons because I'm calling submit with a button elsewhere and am using ajaxSubmit() from the jquery.form.js plugin. Here is the controlling JavaScript code:

        function upload_results($dialog_box) {
            $form = $dialog_box.find("form");
            var options = {
                type: "POST",
                success: function(data) {
                    alert("Hello!!");
                },
                dataType: "json",
                error: function() {
                    console.log("errors");
                },
                beforeSubmit: function(formData, jqForm, options) {
                    console.log(formData, jqForm, options);
                },
            };
            $form.submit(function() {
                $(this).ajaxSubmit(options);
                return false;
            });
            $form.ajaxSubmit(options);
        }

    As you can see, I've gotten desperate to see the success callback work and have simply made it raise an alert on success. However, we never reach that call. The error function is not called either, and the beforeSubmit function is executed. The file that I get back has the following contents:

        {"count": 18, "failed": 0, "completed": 18, "success": true, "trasaction_id": "SQEID0.231"}

    I use 'success' here to denote whether or not the server was able to run the POST command adequately. If it failed, the result would look something like:

        {"success": false, "message": "<error_message>"}

    Your time and help is greatly appreciated. I've spent a few days on this now and would love to move on.
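
    Because the form contains a file input, the plugin submits it through a hidden iframe rather than XMLHttpRequest, and browsers treat an application/json response inside an iframe as a download, which is consistent with the prompt described above. A common workaround is to return the JSON with a text/html content type for these upload responses; a minimal Django-side sketch (the view name and payload are illustrative, not the poster's actual code):

        import json
        from django.http import HttpResponse

        def upload_results(request, suite_id):
            # ... process the uploaded file as before ...
            payload = {"success": True, "count": 18, "completed": 18, "failed": 0}
            # Serve as text/html so the iframe transport doesn't trigger a download prompt
            return HttpResponse(json.dumps(payload), content_type="text/html")

    The jQuery side can keep dataType: "json"; the plugin still extracts and parses the body even though the iframe received it as text/html.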

  • Strange network issue (ZIP file fails CRC test over VPN)

    - by Joe Schmoe
    We have a server in the office running Windows Server 2003. Our office is connected to our datacenter via a hardware VPN (Linksys RV082 router in the office to a Cisco router in the datacenter). A job that runs on the office server does the following: ZIPs certain files from the server using 7-Zip, copies the ZIP file to a network share in the office and verifies ZIP integrity, then copies the ZIP file to a network share in the data center and verifies ZIP integrity. The problem is that verifying ZIP integrity for the file in the data center always fails. However, if I run 7-Zip on the data center server that exposes that share, the ZIP file verifies just fine, so it is not actually corrupted during the copy operation. Additionally, I tried verifying the ZIP file on the datacenter file share from other computers in the office, and it verifies OK. I tried plugging the server into the same network port my workstation uses, with a different cable (my workstation doesn't exhibit this problem), and ZIP verification still fails. So the problem is local to that specific server. In the network adapter properties for the server in question there is no "Advanced" tab where one can usually configure a lot of network settings. The network card driver is up to date (Windows Update doesn't find anything newer and the Lenovo website doesn't have any Windows 2003 drivers for this computer model). Is there any other way to configure network settings via the command line? What settings could be relevant to this problem?
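
    Symptoms like this (data that verifies everywhere except when read back by one particular box) are often blamed on TCP checksum/task offload in the NIC driver. On Windows Server 2003 the usual way to turn that off without an "Advanced" driver tab is a registry value; a hedged example only, and a reboot is required afterwards:

        reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v DisableTaskOffload /t REG_DWORD /d 1 /f

    If verification starts passing afterwards, the NIC or its driver was corrupting data during offload; if not, the value can simply be deleted again.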

  • MaxTotalSizeInBytes - Blind spots in Usage file and Web Analytics Reports

    - by Gino Abraham
    Originally posted on: http://geekswithblogs.net/GinoAbraham/archive/2013/10/28/maxtotalsizeinbytes---blind-spots-in-usage-file-and-web-analytics.aspx
    http://blogs.msdn.com/b/sharepoint_strategery/archive/2012/04/16/usage-file-and-web-analytics-reports-with-blind-spots.aspx

    In my previous post (Troubleshooting SharePoint 2010 Web Analytics), I referenced a problem that can occur when exceeding the daily partition size for the LoggingDB, which generates the ULS message "[Partition] has exceeded the max bytes". Below, I wanted to provide some additional info on this particular issue and help identify some options if this occurs. As an aside, this post only applies if you are missing portions of Usage data - think blind spots on intermittent days, or user activity regularly sparse for the afternoon/evening. If this fits your scenario, read on. But if Usage logs are outright missing, go check out my Troubleshooting post first.

    Background on the problem:
    The LoggingDB database has a default maximum size of ~6GB. However, SharePoint evenly splits this total size into fixed-size logical partitions - and the number of partitions is defined by the number of days to retain Usage data (by default 14 days). In this case, 14 partitions would be created to account for the 14 days of retention. If the retention were halved to 7 days, the LoggingDB would be split into 7 corresponding partitions at twice the size. In other words, the partition size is generally defined as [max size for DB] / [number of retention days].

    Going back to the default scenario, the "max size" for the LoggingDB is 6200000000 bytes (~6GB) and the retention period is 14 days. Using our formula, this would be [~6GB] / [14 days], which equates to 444858368 bytes (~425MB) per partition per day. Again, if the retention were halved to 7 days (which halves the number of partitions), the resulting partition size becomes [~6GB] / [7 days], or ~850MB per partition.

    From my experience, when the partition size for any given day is exceeded, the usage logging for the remainder of the day is essentially thrown away because SharePoint won't allow any more to be written to that day's partition. The only clue that this is occurring (beyond truncated usage data) is an error such as the following that gets reported in the ULS:

        04/08/2012 09:30:04.78  OWSTIMER.EXE (0x1E24)  0x2C98  SharePoint Foundation  Health  i0m6  High
        Table RequestUsage_Partition12 has 444858368 bytes that has exceeded the max bytes 444858368

    It's also worth noting that the exact bytes reported (e.g. '444858368' above) may vary slightly among farms. For example, you may instead see 445226812, 439123456, or something else in the ballpark. The exact number itself doesn't matter; this error message indicates that the reported usage has exceeded the partition size for the given day.

    What it means:
    The error itself is easy to miss, which can lead to substantial gaps in the reporting data (your mileage may vary) if not identified. At this point, I can only advise to periodically check the ULS logs for this message. Down the road, I plan to explore whether [Developing a Custom Health Rule] could be leveraged to identify the issue (if you've ever built Custom Health Rules, I'd be interested to hear about your experiences).

    Overcoming this issue also poses a challenge, with workaround options including:

    Lower the retention
    Because the partition size is generally defined as [max size] / [number of retention days], the first option is to lower the number of days to retain the data - the lower the retention, the lower the divisor and thus a bigger partition. For example, halving the retention from 14 to 7 days would halve the number of partitions, but double the partition size to ~850MB (e.g. [6200000000 bytes] / [7 days] = ~850MB partitions). Lowering it to 2 days would result in two ~3GB partitions... and so on.

    Recreate the LoggingDB with an increased size
    The property MaxTotalSizeInBytes is exposed by OM code for the SPUsageDefinition object and can be updated with the example PowerShell snippet below. However, updating this value has no immediate impact because this size only applies when creating a LoggingDB. Therefore, you must create a new LoggingDB for the Usage Service Application. The gotcha: this effectively deletes all prior Usage data, because the Usage Service Application can only have a single LoggingDB.

    Here is an example snippet to update the "Page Requests" Usage Definition:

        $def = Get-SPUsageDefinition -Identity "page requests"
        $def.MaxTotalSizeInBytes = 12400000000
        $def.Update()

    Then create a new logging database and attach it to the Usage Service Application using the following command:

        Get-SPUsageApplication | Set-SPUsageApplication -DatabaseServer <dbServer> -DatabaseName <newDBname>

    Updated (5/10/2012): Once the new database has been created, you can confirm the setting has truly taken effect by running the following SQL query (be sure to replace the database name in the query with the name provided in the PowerShell above):

        SELECT * FROM [WSS_UsageApplication].[dbo].[Configuration] WITH (NOLOCK)
        WHERE ConfigName LIKE 'Max Total Bytes - RequestUsage'

  • Perl: comparing 2 data files as 2D arrays to find one-to-one matches [migrated]

    - by roman serpa
    I'm writing a program that uses combinations of variables (combiData.txt, 63 rows x a varying number of columns) to analyse a data table (j1j2_1.csv, 1000 rows x 19 columns), in order to count how many times each combination is repeated in the data table and which rows it comes from (for instance, tableData[row][4]). When I run it, however, I get the following messages:

        Use of uninitialized value $val in numeric eq (==) at rowInData.pl line 34.
        Use of reference "ARRAY(0x1a2eae4)" as array index at rowInData.pl line 56.
        Use of reference "ARRAY(0x1a1334c)" as array index at rowInData.pl line 56.
        Use of uninitialized value in subtraction (-) at rowInData.pl line 56.
        Modification of non-creatable array value attempted, subscript -1 at rowInData.pl line 56.
        nothing

    This is my code:

        #!/usr/bin/perl
        use strict;
        use warnings;

        my $line_match;
        my $countTrue;

        open (FILE1, "<combiData.txt") or die "can't open file text1.txt\n";
        my @tableCombi;
        while (<FILE1>) {
            my @row = split(' ', $_);
            push(@tableCombi, \@row);
        }
        close FILE1 || die $!;

        open (FILE2, "<j1j2_1.csv") or die "can't open file text1.txt\n";
        my @tableData;
        while (<FILE2>) {
            my @row2 = split(/\s*,\s*/, $_);
            push(@tableData, \@row2);
        }
        close FILE2 || die $!;

        # function that transforms a combiData.txt variable (position) into the real
        # value that I have to find in the data table
        sub trueVal($) {
            my ($val) = $_[0];
            if    ($val == 7)  { return ('nonsynonymous_SNV'); }
            elsif ($val == 14) { return '1'; }
            elsif ($val == 15) { return '1'; }
            elsif ($val == 16) { return '1'; }
            elsif ($val == 17) { return '1'; }
            elsif ($val == 18) { return '1'; }
            elsif ($val == 19) { return '1'; }
            else { print 'nothing'; }
        }

        # function IntToStr (I'm not sure if it is necessary) that turns values into
        # strings, so <eq> can be used in the third loop when the array of
        # combinations is compared with the data array
        sub IntToStr {
            return "$_[0]";
        }

        for my $combi (@tableCombi) {
            $line_match = 0;
            for my $sheetData (@tableData) {
                $countTrue = 0;
                for my $cell (@$combi) {
                    #my $temp = \$tableCombi[$combi][$cell];
                    #if ( trueVal($tableCombi[$combi][$cell]) eq $tableData[$sheetData][ $tableCombi[$combi][$cell] - 1 ] ) {
                    #if ( IntToStr(trueVal($$temp)) eq IntToStr( $tableData[$sheetData][ $$temp - 1 ] ) ) {
                    if ( IntToStr(trueVal($tableCombi[$combi][$cell])) eq IntToStr($tableData[$sheetData][ $tableCombi[$combi][$cell] - 1 ]) ) {
                        $countTrue++;
                    }
                    if ($countTrue == @$combi) {
                        $line_match++;
                        #if ($line_match < 50) {
                        print $tableData[$sheetData][4] . " ";
                        #}
                    }
                }
            }
            print $line_match . " \n";
        }
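
    The warnings point at the underlying problem: in for my $combi (@tableCombi), the loop variable $combi is an array reference, not an index, so expressions like $tableCombi[$combi][$cell] use a reference where a number is expected, and $cell is already the value stored in the combination. A hedged sketch of how the comparison loops could look if the loop variables are used directly (untested, based only on my reading of the intent):

        for my $combi (@tableCombi) {
            my $line_match = 0;
            for my $data_row (@tableData) {
                my $count_true = 0;
                for my $cell (@$combi) {
                    # $cell is the column position stored in combiData.txt;
                    # compare the expected value against that column of the data row
                    $count_true++ if trueVal($cell) eq $data_row->[$cell - 1];
                }
                if ($count_true == @$combi) {
                    $line_match++;
                    print $data_row->[4], " ";
                }
            }
            print "$line_match\n";
        }

    Moving the $count_true check outside the innermost loop (as above) also means each matching data row is counted once per combination rather than once per cell.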

  • Creating files in the C:\ root gives error 0x80070522

    - by Bryan
    One of our customers has just found a problem when trying to create a file in the root of the C:\ drive on a Windows 7 Professional PC. I know they shouldn't be keeping files here, but there is a valid reason in this case, so I've relaxed the security on the root of C:\ by giving the group 'Users' modify permission. Before I relaxed the security, the user was receiving 'access denied', but now they are receiving the message:

        An unexpected error is keeping you from creating the file. If you continue to receive
        this error, you can use the error code to search for help with this problem.
        Error 0x80070522: A required privilege is not held by the client.

    Googling for this suggests that it is caused by UAC, but how can I get round this when the user doesn't have admin rights on their PC?

  • Windows Server 2008 CMD scheduled task not running completely

    - by Jonathan Platovsky
    I have a BAT/CMD file that runs completely when run from the command prompt, but only partially runs when I run it through the Task Scheduler. Here is a copy of the file:

        cd\sqlbackup
        ren Apps_Backup*.* Apps.Bak
        ren Apps_Was_Backup*.* Apps_Was.Bak
        xcopy /Y c:\sqlbackup\*.bak c:\sqlbackup\11\*.bak
        xcopy /y c:\sqlbackup\*.bak \\igweb01\c$\sqlbackup\*.bak
        Move /y c:\sqlbackup\*.bak "\\igsrv01\d$\sql backup\"

    The last two lines do not run when the Task Scheduler calls the file, but again, they work when it is run manually from the command line. All the commands against the local server run, but when it comes to the last two lines, where files go to another server, nothing happens.
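
    When a task runs under an account (or logon type) that has no credentials on the remote machines, administrative shares such as \\igweb01\c$ and \\igsrv01\d$ are unreachable even though they work in an interactive session. One hedged workaround, assuming a domain account that has rights on both servers (the account name and password below are placeholders), is to authenticate explicitly inside the script before the copies:

        net use \\igweb01\c$ /user:DOMAIN\backupuser password
        net use \\igsrv01\d$ /user:DOMAIN\backupuser password
        xcopy /y c:\sqlbackup\*.bak \\igweb01\c$\sqlbackup\*.bak
        Move /y c:\sqlbackup\*.bak "\\igsrv01\d$\sql backup\"
        net use \\igweb01\c$ /delete
        net use \\igsrv01\d$ /delete

    Alternatively, configuring the scheduled task to run under an account that already has access to both servers avoids storing a password in the batch file.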

  • Can't copy files with 'additional permissions' to ext4 drive -- files that have @ after their permissions

    - by 99miles
    I am copying files from Snow Leopard to a mounted ext4 share via Samba that's on a Fedora machine. Some files cannot be copied, and give this error: "The operation can't be completed because you don't have permission to access some of the items." I've noticed that the files that can't be copied have an @ at the end of their permissions when I do 'ls -l' on the command line. For example, I can copy the second file but not the first:

        -rwxrwxrwx@ 1 miles staff 1448 May 14 22:55 test.txt
        -rw-r--r--  1 miles staff  136 Apr  5 17:06 image.psd.zip

    From what I've found, the @ means the file has extended attributes. Does anyone know how I can resolve this issue so I can copy the files to the fileshare? Thanks!
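
    The @ flag is OS X's marker for extended attributes, and it is usually those attributes (rather than the POSIX permissions) that trip up a copy to a non-HFS+ target. A hedged way to inspect and strip them from the command line on Snow Leopard (the attribute name below is only a common example; list first to see what the file actually carries):

        # list the extended attributes on the problem file
        xattr -l test.txt

        # remove a specific attribute, e.g. the quarantine flag
        xattr -d com.apple.quarantine test.txt

    Once the attributes are removed, or the copy is done with a tool that ignores them, the copy to the Samba share generally goes through.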

  • Why is the XML DTD not found by the browser

    - by hyperuser
    When I load my XML file in a browser, it complains there is 'no style information': "This XML file does not appear to have any style information associated with it. The document tree is shown below." So I wrote an external DTD, then an internal DTD, but I keep getting the same 'no style information' message. It doesn't even show the DTD! What am I doing wrong?

        <?xml version="1.0"?>
        <!DOCTYPE fotos [
          <!ELEMENT fotos (titel,auteur)>
          <!ELEMENT titel (#PCDATA)>
          <!ELEMENT auteur (#PCDATA)>
        ]>
        <fotos>
          <titel>titel1</titel>
          <auteur>jan</auteur>
        </fotos>
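
    The browser message is informational rather than an error: a DTD describes the document's structure, not how to display it, and browsers do not render the DTD itself. "Style information" means a stylesheet attached with an xml-stylesheet processing instruction; a minimal sketch, assuming a CSS file named fotos.css sitting next to the XML:

        <?xml version="1.0"?>
        <?xml-stylesheet type="text/css" href="fotos.css"?>
        <!DOCTYPE fotos [
          <!ELEMENT fotos (titel,auteur)>
          <!ELEMENT titel (#PCDATA)>
          <!ELEMENT auteur (#PCDATA)>
        ]>
        <fotos>
          <titel>titel1</titel>
          <auteur>jan</auteur>
        </fotos>

    with fotos.css containing, for example:

        titel  { display: block; font-weight: bold; }
        auteur { display: block; }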

  • Can connect to Samba server but cannot access shares?

    - by jlego
    I have set up a stand-alone box running Fedora 16 to use as a file-sharing and web development server. It needs to share files with a PC running Windows 7 and a Mac running OS X Snow Leopard. I've set up Samba using the Samba configuration GUI tool, added users to Fedora and connected them as Samba users (which are the same as the Windows and Mac usernames and passwords). The workgroup name is the same as the Windows workgroup. Authentication is set to User. I've allowed Samba and the Samba client through the firewall and set the ethernet interface to a trusted port in the firewall. Both the Windows and Mac machines can connect to the server and view the shares, however when trying to access the shares, Windows throws error 0x80070035 "Windows cannot access \\SERVERNAME\ShareName." The Windows user is not prompted for a username or password when accessing the server (found under "Network Places"). This also happens when connecting with the IP rather than the server name. The Mac can also connect to the server and see the shares, but when choosing a share it gives the error "The original item for ShareName cannot be found." When connecting via IP, the Mac user is prompted for a username and password, which when authenticated gives a list of shares; however, when choosing a share to connect to, the error is displayed and the user cannot access the share. Since both machines are acting similarly when trying to access the shares, I assume it is an issue with how Samba is configured.

    smb.conf:

        [global]
        workgroup = workgroup
        server string = Server
        log file = /var/log/samba/log.%m
        max log size = 50
        security = user
        load printers = yes
        cups options = raw
        printcap name = lpstat
        printing = cups

        [homes]
        comment = Home Directories
        browseable = no
        writable = yes

        [printers]
        comment = All Printers
        path = /var/spool/samba
        browseable = yes
        printable = yes

        [FileServ]
        comment = FileShare
        path = /media/FileServ
        read only = no
        browseable = yes
        valid users = user1, user2

        [webdev]
        comment = Web development
        path = /var/www/html/webdev
        read only = no
        browseable = yes
        valid users = user1

    How do I get Samba sharing working?

    UPDATE: Before this box I had another box with the same version of Fedora (16) installed and Samba working for these same computers. I started up the old machine and copied the smb.conf file from the old machine to the new one (editing the share definitions for the new shares, of course) and I still get the same errors on both client machines. The only difference in environment is the hardware and the router. On the old machine the router received a dynamic public IP and assigned dynamic private IPs to each device on the network, while the new machine is connected to a router that has a static public IP (still dynamic internal IPs though). Could either one of these be affecting Samba?

    UPDATE 2: As the directory I am trying to share is actually an entire internal disk, I have tried two things: 1) changing the owner of the mounted disk from root to my user (which is the same username as on the Windows machine); 2) making a share that only included one of the folders on the disk instead of the entire disk, with my user again as the owner. Both tests failed, giving me the same errors regarding the network address.

    UPDATE 3: Not sure exactly what I did, but now whenever I try to connect to the share on the Windows 7 client I am prompted for my username and password. When I enter the correct credentials I get an access denied message. However, I did notice that under the login box "domain: WINDOWS-PC-NAME" is listed. I believe this could very well be the problem. Any suggestions?

    UPDATE 4: So I've completely reinstalled Fedora and Samba now. I've created a share on the first hard drive (the one Fedora is installed on) and I can access that fine from Windows. However, when I try to share any data on the second disk, I receive the same error. This, I believe, is the problem. I think I need to change some things in fstab or fdisk or something.

    UPDATE 5: In fstab I mapped the drive to automount in a folder, which works correctly. I also added the samba_share_t SELinux label to the mountpoint directory, which now allows me to access the shares on the Windows machine; however, I cannot see any of the files in the directory on the Windows machine. (They are there; I can see them in the Fedora file browser locally.)

    UPDATE 6: Figured it out. See answer below.
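
    For reference, the SELinux labeling mentioned in UPDATE 5 is typically applied on Fedora with something like the following (the path comes from the smb.conf above; semanage is provided by the policycoreutils-python package):

        # make the label persistent for the mounted disk and everything below it
        semanage fcontext -a -t samba_share_t "/media/FileServ(/.*)?"
        restorecon -Rv /media/FileServ

        # or, to let Samba read/write anywhere (broader, less strict)
        setsebool -P samba_export_all_rw on

    A one-off chcon -R -t samba_share_t /media/FileServ also works, but that label can be lost on a filesystem relabel, which is why the semanage/restorecon route is usually preferred.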

  • Recovering database files from a corrupted VHD

    - by Apocalypse9
    We have a SQL Server instance hosted on a virtual machine. Our hosting company updated/restarted the server, and for some reason the virtual machines became unbootable. We've spoken to Microsoft and used a few higher-level tools to attempt to recover the virtual machines, but were unsuccessful. When browsing the file system, the database folder doesn't even appear. I'm wondering if there are any lower-level tools that might be able to find and copy the database files. As far as I know the physical hard drive is OK, so I'm hoping there may be some way to recover the files themselves even if the rest of the virtual machine's file system is a loss. Obviously we're in a bit of a bind, and any help/suggestions are very much appreciated.

  • find files where group permissions equal user permissions

    - by Jayen
    Is it possible to do something like find -perm g=u? I say "like" because -perm mode requires mode to specify all the bits, not just g, and because I can't put u on the right side of the =, like I can with the chmod command:

        you can specify exactly one of the letters ugo: the permissions granted to the user
        who owns the file (u), the permissions granted to other users who are members of
        the file's group (g), and the permissions granted to users that are in neither of
        the two preceding categories (o).

    At the moment, I'm doing

        find | xargs -d \\n ls -lartd | egrep '^.(...)\1'

    which is just ugly. Thanks.
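
    With GNU find there is no direct -perm g=u, but the octal mode can be printed with -printf and compared outside of find. A hedged sketch (GNU find and awk assumed; it prints the mode in front of each matching path):

        find . -printf '%m %p\n' | awk '{ m=$1; u=substr(m, length(m)-2, 1); g=substr(m, length(m)-1, 1) } u == g'

    Here %m is the octal permission string (possibly four digits when setuid/setgid/sticky bits are set), so the user and group digits are always the third- and second-to-last characters.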

  • Excel 2003 opening files on network

    - by Luke
    The network is laid out with an XP Pro computer acting as the server hosting files, and 3 XP computers connecting to it for file sharing, all on its own router. One computer can open .xls files with no problem, and she runs Office XP. The other two computers run Office 2003, and cannot open any shared files by double-clicking them or by selecting File-Open in Excel. If the file gets copied to the local computer, it opens instantly. I have tried disabling the AV on all computers, disabling the Windows firewall, and double-checking permissions on the server. I have also tried disabling DDE, but that doesn't help at all, nor does unticking Tools-Options-"Ignore other applications". Any ideas? This apparently started a couple of days ago.

  • Configuring SMB shares in OS X

    - by Craig Walker
    I'm at my wit's end trying to control SMB file sharing on my Mac (OS X 10.5 Leopard). I want to do something fairly simple: share a particular (non-home, non-Public) folder over my SMB/Windows network with two users (accounts are local to my Mac), and share no other folders with anyone. The instructions on the internet are fairly straightforward: add the folders to be shared to the File Sharing panel of the Sharing pane in System Preferences, and ensure that sharing through SMB is enabled. However, when I actually try to connect via an SMB client (Windows XP in this case), the share does not appear. I see my home directory, "Macintosh HD", and my printers, but not the folder I just shared. I ensured that the underlying directory had the proper permissions (since this seems to affect share visibility) and that the "Shared Folder" checkbox was checked, but this didn't have any effect. I checked /etc/smb.conf but there was nothing obviously out of place there. I've also restarted smbd and rebooted. What else should I be looking for?

  • Scheduled task to map a network drive runs, but doesn't map the drive

    - by bikefixxer
    I have a task set up to run whenever the computer is logged on to, which deletes all mapped network drives and then maps one network drive. Here is what is in the batch file:

        @echo off
        net use * /delete /y
        net use b: \\Server\Share /user:DOMAIN\Username password
        exit

    When the computer is restarted or logged off and back on, the task runs fine (according to the Scheduled Tasks window showing when it last ran) but the mapped drive doesn't show up. If I open the command prompt and type "net use", it simply says "There are no entries in the list." If I then right-click on the task and run it, it works and the mapped drive shows up. I've checked the log and nothing shows up. I've tried adding a delay to the batch file so it waits 10 seconds (ping 1.1.1.1 -n 1 -w 10000 >nul), thinking that maybe the network wasn't connected yet, but that didn't work. What else can I try? Thanks!
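
    Two things commonly bite here: a drive mapped by a task running under a different account (or at a different elevation level) is not visible in the user's own session, and a fixed 10-second delay may still fire before the network is actually up. A hedged variation that keeps the task running as the logged-on user and retries until the file server answers (the server name and retry limit are illustrative):

        @echo off
        net use * /delete /y
        set tries=0
        :wait
        ping -n 1 Server >nul 2>&1
        if not errorlevel 1 goto mapdrive
        set /a tries+=1
        if %tries% geq 12 exit /b 1
        ping 1.1.1.1 -n 1 -w 5000 >nul
        goto wait
        :mapdrive
        net use b: \\Server\Share /user:DOMAIN\Username password
        exit /b 0

    If the task is configured to run with highest privileges, it can also be worth testing without that option, since elevated and non-elevated sessions keep separate drive mappings under UAC.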

  • Active Desktop cannot be restored on Win XP

    - by Phil.Wheeler
    This is more of a major annoyance than anything that's stopping me from doing my work, but I somehow seem to have had Active Desktop get corrupted on my work XP machine and now can't get it working again. I've tried browsing to C:\Documents and Settings\%my-user-name%\Application Data\Microsoft\Internet Explorer and changing, deleting or replacing the Desktop.htt file, but it's not achieving anything. The error I was previously getting when trying to click the "Restore my Active Desktop" button was:

        Internet Explorer Script Error
        An error has occurred in the script on this page.
        Line:  65
        Char:  1
        Error: Object doesn't support this action
        Code:  0
        URL:   file://C:/Documents%20and%20Settings//Application%20Data/Microsoft/Internet%20Explorer/Desktop.htt

    Any ideas?

  • Postfix configuration problem

    - by dhanya
    Can anyone help me by giving me your Postfix configuration file as a reference, so that I can find my mistakes? I'm working on SUSE Linux Enterprise Server. My goal is to set up a mail server in a campus network. Postfix shows it is running, but no mail is delivered to /var/spool/mail when I send mail using the mail command at the terminal. Here is my main.cf file; please help me find a solution:

        readme_directory = /usr/share/doc/packages/postfix-doc/README_FILES
        inet_protocols = all
        biff = no
        mail_spool_directory = /var/mail
        canonical_maps = hash:/etc/postfix/canonical
        virtual_alias_maps = hash:/etc/postfix/virtual
        virtual_alias_domains = hash:/etc/postfix/virtual
        relocated_maps = hash:/etc/postfix/relocated
        transport_maps = hash:/etc/postfix/transport
        sender_canonical_maps = hash:/etc/postfix/sender_canonical
        masquerade_exceptions = root
        masquerade_classes = envelope_sender, header_sender, header_recipient
        myhostname = cmail.cetmail
        delay_warning_time = 1h
        message_strip_characters = \0
        program_directory = /usr/lib/postfix
        inet_interfaces = all
        #inet_interfaces = 127.0.0.1
        masquerade_domains = cetmail
        mydestination = cmail.cetmail, localhost.cetmail, cetmail
        defer_transports =
        mynetworks_style = subnet
        disable_dns_lookups = no
        relayhost = postfix
        mailbox_command = cyrus
        mailbox_transport =
        strict_8bitmime = no
        disable_mime_output_conversion = no
        smtpd_sender_restrictions = hash:/etc/postfix/access
        smtpd_client_restrictions =
        smtpd_helo_required = no
        smtpd_helo_restrictions =
        strict_rfc821_envelopes = no
        smtpd_recipient_restrictions = permit_mynetworks,reject_unauth_destination
        smtp_sasl_auth_enable = no
        smtpd_sasl_auth_enable = no
        smtpd_use_tls = no
        smtp_use_tls = no
        alias_maps = hash:/etc/aliases
        mailbox_size_limit = 0
        message_size_limit = 10240000

  • Adding lines to /etc/profile with puppet?

    - by miku
    I use puppet to install a current JDK and Tomcat:

        package { [ "openjdk-6-jdk", "openjdk-6-doc", "openjdk-6-jre",
                    "tomcat6", "tomcat6-admin", "tomcat6-common",
                    "tomcat6-docs", "tomcat6-user" ]:
            ensure => present,
        }

    Now I'd like to add

        JAVA_HOME="/usr/lib/java"
        export JAVA_HOME

    to /etc/profile, just to get this out of the way. I haven't found a straightforward answer in the docs yet. Is there a recommended way to do this? In general, how do I tell puppet to place this file there or modify that file? I'm using puppet for a single node (in standalone mode) just to try it out and to keep a log of the server setup.
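
    Rather than editing /etc/profile in place, a common pattern is to drop a small script into /etc/profile.d and let Puppet manage it as an ordinary file resource; a minimal sketch, assuming /usr/lib/java really is where the JDK lives on this system:

        file { '/etc/profile.d/java_home.sh':
          ensure  => file,
          owner   => 'root',
          group   => 'root',
          mode    => '0644',
          content => "JAVA_HOME=\"/usr/lib/java\"\nexport JAVA_HOME\n",
        }

    For genuinely line-level edits of an existing file, the file_line resource from the puppetlabs-stdlib module (or an augeas resource) is the usual alternative.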

  • Recovery of Pinnacle Studio Project Files

    - by seanieb
    My external hard drive had some sort of issue a few months ago, but I was able to recover my files using a data recovery program. However, my Pinnacle Studio project files are not being recovered as they were before: they are being recovered as directories/folders that contain subdirectories and files. I have tried several different recovery programs and they all recover the projects as directories. Each project contains one file called README.TXT:

        * WARNING *
        This directory contains the descriptive data of the project, split into
        various subdirectories and files for better access.
        DO NOT EDIT, ADD, CHANGE OR MODIFY ANY OF ITS CONTENTS!

    This gives me hope that I could somehow turn the directory back into a .stu Pinnacle Studio project file. How would I go about doing this? Or is there another way to solve this problem?

  • What is the command to check if a command's results mention OK?

    - by Manuel
    Alright, so I was playing around with changing the MTU size and wanted to make a batch file to automatically lower it and then raise it later. This is probably simple, but I just can't figure it out. The point is: is there a way to run a command that would normally echo out "Ok." and check whether it actually did say Ok? And if it doesn't say Ok, stop the rest of the file from running and exit. The command I'm using is

        netsh interface ipv4 set subinterface "Local Area Connection" mtu=386 store=persistent

    which, as I mentioned above, prints out an "Ok.". I just want to check whether it ran correctly, and if not, then do __
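
    A hedged sketch of one way to do this in a batch file, assuming the command really does print "Ok." on success (findstr sets ERRORLEVEL to 1 when the text is not found):

        @echo off
        netsh interface ipv4 set subinterface "Local Area Connection" mtu=386 store=persistent | findstr /i /c:"Ok." >nul
        if errorlevel 1 (
            echo MTU change failed, stopping here.
            exit /b 1
        )
        echo MTU lowered, carrying on...

    Checking %ERRORLEVEL% right after netsh itself is another option if it reliably reports failures on your build; matching the output text is the more literal answer to the question as asked.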

  • How to Automate .inf PnP Windows 7 Drivers with Wildcard?

    - by Dos_Probie
    I am trying to automate PnP driver installs for Windows 7 with either of the batch files below, using a For loop and a wildcard for the .inf files. The rundll32 batch reads and echoes the correct .inf file but then gives me "Error: Installation failed", and the pnputil batch runs without any error but does not install anything. How can I correct the batch files so that they install the .inf files correctly?

        @echo off&color a&setlocal enabledelayedexpansion
        cd %~dp0
        set PnP=rundll32 syssetup,SetupInfObjectInstallAction DefaultInstall 128 .\*.inf
        for /f "delims=" %%a in ('dir/b %PnP%') do (
            echo == Installing PnP Drivers == "%%a"

        ::or

        set PnP=pnputil -i -a "*.inf"
        for /f "delims=" %%a in ('dir/b %PnP%') do (
            echo == Installing PnP Drivers == "%%a"
            ping -n 3 localhost 1>nul
            start "" /wait %PnP%\%%a
        )
        cls
        echo. * DONE *
        pause
        exit
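
    The for /f loops above hand the whole %PnP% command to dir /b, so the wildcard never actually enumerates the .inf files. A hedged rework that loops over the .inf files directly and feeds each one to pnputil (paths relative to the script's folder):

        @echo off
        setlocal
        cd /d "%~dp0"
        for /f "delims=" %%a in ('dir /b *.inf') do (
            echo == Installing PnP driver %%a ==
            pnputil -i -a "%%a"
        )
        echo * DONE *
        pause

    pnputil -a adds the package to the driver store and -i additionally installs it on matching devices, so no rundll32 call should be needed for standard PnP drivers.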

  • Joining an Active Directory domain using netdom

    - by Cheezo
    I have a simple script to join an AD domain and rename the computer. When I execute these commands directly on the command line, they work fine. When I execute the same commands via a batch file, I get an error saying "The network path was not found". I am running as Administrator with full privileges. I have googled around the Microsoft forums, but my case is unusual because it works from the CLI and not from the batch file:

        netdom join %%computername%% /domain:OPSCODEDEMO.COM /userd:Administrator /passwordd:xxx
        netdom renamecomputer %%computername%% /NewName:%hostname% /Force

    The environment is Windows 2k8 R2 SP1 running on Ninefold Cloud (XenServer).
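
    One thing worth ruling out: inside a .bat file, %% is an escaped literal percent sign, so %%computername%% is passed to netdom as the literal text %computername% rather than the machine name (on the interactive command line a single % works, which would explain why the two behave differently). A hedged version using normal single-percent expansion:

        netdom join %COMPUTERNAME% /domain:OPSCODEDEMO.COM /userd:Administrator /passwordd:xxx
        netdom renamecomputer %COMPUTERNAME% /NewName:%hostname% /Force

    If %hostname% is not already defined in the environment, it would also need to be set (or passed as an argument) before the rename.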

  • Openssh sftp-server: .filepart support?

    - by Guillaume Bodi
    I am trying to set up an SFTP server running on Ubuntu Server 11.04. I installed openssh-server to provide SSH access. What I am trying to do is make file uploads be written with a suffix (.filepart or similar), which would be removed upon transfer completion. The idea of the flow is:

        1. The user uploads cat.jpg
        2. The server starts writing cat.jpg.filepart in the destination directory
        3. Once the upload completes, the server removes any previous cat.jpg and renames cat.jpg.filepart to cat.jpg

    This is to make sure that incomplete file uploads do not overwrite existing files. Any idea on how I can do this? Thanks

  • Open Microsoft Publisher Document on Linux

    - by Peter
    I'm pretty sure the options consist of:

        1. Just don't do it (use a nice open standard file format). Not great when someone sends you something.
        2. Translate the format on Windows. I think you need Publisher (the viewer won't even print), but you can download a trial version for a once-off (been there, done that).
        3. Submit the file for online translation to PDF: www.pdfonline.com/convert-pdf/
        4. Use a Windows VM, Wine, CrossOver Office, Win4Lin, or otherwise run Publisher "under" Linux.

    What I really want to do is convert it to something nicer natively under Linux.
