Search Results

Search found 49453 results on 1979 pages for 'memory mapped files'.


  • git filter-branch chmod

    - by Evan Purkhiser
    I accidentally had my umask set incorrectly for the past few months and somehow didn't notice. One of my git repositories has many files marked as executable that should be just 644. This repo has one main master branch, and about 4 private feature branches (that I keep rebased on top of the master). I've corrected the files in my master branch by running find -type f -exec chmod 644 {} \; and committing the changes. I then rebased my feature branches onto master. The problem is there are newly created files in the feature branches that exist only in those branches, so they weren't corrected by my massive chmod commit. I didn't want to create a new commit for each feature branch that does the same thing as the commit I made on master. So I decided it would be best to go back through each commit where a file was made and set the permissions. This is what I tried: git filter-branch -f --tree-filter 'chmod 644 `git show --diff-filter=ACR --pretty="format:" --name-only $GIT_COMMIT`; git add .' master.. It looked like this worked, but upon further inspection I noticed that every commit after a commit containing a new file with the proper permissions of 644 would actually revert the change with something like: diff --git a b old mode 100644 new mode 100755 I can't for the life of me figure out why this is happening. I think I must be misunderstanding how git filter-branch works. My solution: I've managed to fix my problem using this command: git filter-branch -f --tree-filter 'FILES="$FILES "`git show --diff-filter=ACMR --pretty="format:" --name-only $GIT_COMMIT`; chmod 644 $FILES; true' development.. I keep adding onto the FILES variable to ensure that in each commit any file created at some point has the proper mode. However, I'm still not sure I really understand why git tracks the file mode for each commit. I had thought that since I had fixed the mode of the file when it was first created, it would stay that mode unless one of my other commits explicitly changed it to something else. That did not appear to be the case. The reason I thought this would work is from my understanding of rebase: if I go back to HEAD~5 and change a line of code, that change is propagated through; it doesn't just get changed back in HEAD~4.

    Read the article

  • node.js callback getting unexpected value for variable

    - by defrex
    I have a for loop, and inside it a variable is assigned with var. Also inside the loop a method is called which requires a callback. Inside the callback function I'm using the variable from the loop. I would expect that it's value, inside the callback function, would be the same as it was outside the callback during that iteration of the loop. However, it always seems to be the value from the last iteration of the loop. Am I misunderstanding scope in JavaScript, or is there something else wrong? The program in question here is a node.js app that will monitor a working directory for changes and restart the server when it finds one. I'll include all of the code for the curious, but the important bit is the parse_file_list function. var posix = require('posix'); var sys = require('sys'); var server; var child_js_file = process.ARGV[2]; var current_dir = __filename.split('/'); current_dir = current_dir.slice(0, current_dir.length-1).join('/'); var start_server = function(){ server = process.createChildProcess('node', [child_js_file]); server.addListener("output", function(data){sys.puts(data);}); }; var restart_server = function(){ sys.puts('change discovered, restarting server'); server.close(); start_server(); }; var parse_file_list = function(dir, files){ for (var i=0;i<files.length;i++){ var file = dir+'/'+files[i]; sys.puts('file assigned: '+file); posix.stat(file).addCallback(function(stats){ sys.puts('stats returned: '+file); if (stats.isDirectory()) posix.readdir(file).addCallback(function(files){ parse_file_list(file, files); }); else if (stats.isFile()) process.watchFile(file, restart_server); }); } }; posix.readdir(current_dir).addCallback(function(files){ parse_file_list(current_dir, files); }); start_server(); The output from this is: file assigned: /home/defrex/code/node/ejs.js file assigned: /home/defrex/code/node/templates file assigned: /home/defrex/code/node/web file assigned: /home/defrex/code/node/server.js file assigned: /home/defrex/code/node/settings.js file assigned: /home/defrex/code/node/apps file assigned: /home/defrex/code/node/dev_server.js file assigned: /home/defrex/code/node/main_urls.js stats returned: /home/defrex/code/node/main_urls.js stats returned: /home/defrex/code/node/main_urls.js stats returned: /home/defrex/code/node/main_urls.js stats returned: /home/defrex/code/node/main_urls.js stats returned: /home/defrex/code/node/main_urls.js stats returned: /home/defrex/code/node/main_urls.js stats returned: /home/defrex/code/node/main_urls.js stats returned: /home/defrex/code/node/main_urls.js For those from the future: node.devserver.js
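
    The behaviour described above is the classic var-in-a-loop closure problem: file is declared with var, so it is function-scoped rather than per-iteration, and by the time each asynchronous callback runs the loop has already finished and file holds its last value. Below is a minimal illustrative sketch (not the poster's code; it uses setTimeout as a stand-in for any asynchronous callback, since the old posix module API is not assumed here) showing the effect and one common fix: capturing the current value with an immediately-invoked function.

        // Problem: `file` is function-scoped, so every callback sees the last value.
        function parseAll(files) {
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                setTimeout(function () {
                    console.log('callback sees: ' + file); // always the last element
                }, 0);
            }
        }

        // One fix: pass the value into an immediately-invoked function so each
        // callback closes over its own copy (in modern JavaScript, declaring
        // `let file` inside the loop achieves the same thing).
        function parseAllFixed(files) {
            for (var i = 0; i < files.length; i++) {
                (function (file) {
                    setTimeout(function () {
                        console.log('callback sees: ' + file); // the value for this iteration
                    }, 0);
                })(files[i]);
            }
        }

        parseAll(['a.js', 'b.js', 'c.js']);      // prints "c.js" three times
        parseAllFixed(['a.js', 'b.js', 'c.js']); // prints a.js, b.js, c.js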

    Read the article

  • Parallel programming in C#

    - by Alxandr
    I'm interested in learning about parallel programming in C#.NET (not everything there is to know, but the basics and maybe some good practices), therefore I've decided to reprogram an old program of mine called ImageSyncer. ImageSyncer is a really simple program; all it does is scan through a folder and find all files ending with .jpg, then it calculates the new position of the files based on the date they were taken (parsing of EXIF data, or whatever it's called). After a location has been generated the program checks for any existing file at that location, and if one exists it looks at the last write time of both the file to copy and the file "in its way". If those are equal the file is skipped. If not, an MD5 checksum of both files is created and matched. If there is no match the file to be copied is given a new location to be copied to (for instance, if it was to be copied to "C:\test.jpg" it's copied to "C:\test(1).jpg" instead). The result of this operation is populated into a queue of a struct type that contains two strings, the original file and the position to copy it to. Then that queue is iterated over until it is empty and the files are copied. In other words there are 4 operations: 1. Scan directory for jpegs 2. Parse files for EXIF data and generate copy location 3. Check for file existence and if needed generate new path 4. Copy files And so I want to rewrite this program to make it parallel and be able to perform several of the operations at the same time, and I was wondering what the best way to achieve that would be. I've come up with two different models I can think of, but neither of them might be any good at all. The first one is to parallelize the 4 steps of the old program, so that when step one is to be executed it's done on several threads, and when the whole of step 1 is finished, step 2 begins. The other one (which I find more interesting because I have no idea how to do it) is to create a sort of worker and consumer model, so when a thread is finished with step 1 another one takes over and performs step 2 on that object (or something like that). But as I said, I don't know if either of these is a good solution. Also, I don't know much about parallel programming at all. I know how to make a thread, and how to make it perform a function taking in an object as its only parameter, and I've also used the BackgroundWorker class on one occasion, but I'm not that familiar with any of them. Any input would be appreciated.

    Read the article

  • How do I add Objective C code to a FireBreath Project?

    - by jmort253
    I am writing a browser plugin for Mac OS that will place an icon in the status bar, which users can use to interface with the browser plugin. I've successfully built a FireBreath 1.6 project in Xcode 4.4.1, and can install it in the browser. However, FireBreath uses C++, whereas a large majority of the existing libraries for Mac OS are written in Objective C. In the /Mac/projectDef.make file, I added the Cocoa Framework and Foundation Framework, as suggested here and in other resources I've found on the Internet: target_link_libraries(${PROJECT_NAME} ${PLUGIN_INTERNAL_DEPS} ${Cocoa.framework} # added line ${Foundation.framework} # added line ) I reran prepmac.sh, expecting a new project to be created in Xcode with my .mm and .m files; however, it seems they're being ignored. I only see the .cpp and .h files. I added rules for those in the projectDef.make file, but it doesn't seem to make a difference: file (GLOB PLATFORM RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} Mac/[^.]*.cpp Mac/[^.]*.h Mac/[^.]*.m #added by me Mac/[^.]*.mm #added by me Mac/[^.]*.cmake ) Even if I add the files in manually, I get a series of compilation errors. There are about 20 of them, all related to the NSObjRuntime.h file: Parse Issue - Expected unqualified-id Parse Issue - Unknown type name 'NSString' Semantic Issue - Use of undeclared identifier 'NSString' Parse Issue - Unknown type name 'NSString' ... ... Semantic Issue - Use of undeclared identifier 'aSelectorName' ... ... Semantic Issue - Use of undeclared identifier 'aClassName' ... It continues like this for some time with similar errors. From what I've read, these errors appear because of dependencies on the Foundation Framework, which I believe I've included in the project. I also tried clicking the project in Xcode. I'm to the point now where I'm not sure what to try next. People say it's not hard to use Objective C in C/C++ code, but being new to Xcode and Objective C might contribute to my confusion. This is only day 4 for me on this new platform. What do I need to do to get Xcode to compile the Objective C code? Please remember that I'm a little new to this, so I'd appreciate it if you leave detailed answers as opposed to the vague one-liners that are common in the firebreath tag. I'm just a little in over my head, but if you can get me past this hurdle I'm certain I'll be good to go from there. UPDATE: I edited projects/MyPlugin/CMakeLists.txt and added the .m and .mm rules there too. After running prepmac.sh, the files are included in the project, but I still get the same compile errors. I moved all the .h and .mm files from the Obj C code to the MyPlugin root folder and reran the prepmac.sh file. Problem still exists. Same compile errors.

    Read the article

  • get renamed file names of multiple upload form [js array] in codeigniter

    - by artmania
    Hi friends, I use CodeIgniter. I have a multiple image upload form. The code below is working well for uploading, but I also need to save the file names to the database. How can I get the names here? I've spent hours and hours on this but could not sort it out :/ Any help is appreciated! uploadform.php echo form_open_multipart('gallery/upload'); <input type="file" name="photo" size="50" /> <input type="file" name="thumb" size="50" /> <input type="submit" value="Upload" /> </form> I have a controller between the form view and the model that loads the model (of course : )), but I didn't post it here because it isn't needed. gallery_model.php function multiple_upload($upload_dir = 'uploads/', $config = array()) { /* Upload */ $CI =& get_instance(); $files = array(); if(empty($config)) { $config['upload_path'] = realpath($upload_dir); $config['allowed_types'] = 'gif|jpg|jpeg|jpe|png'; $config['max_size'] = '2048'; } $CI->load->library('upload', $config); $errors = FALSE; foreach($_FILES as $key => $value) { if( ! empty($value['name'])) { if( ! $CI->upload->do_upload($key)) { $data['upload_message'] = $CI->upload->display_errors(ERR_OPEN, ERR_CLOSE); // ERR_OPEN and ERR_CLOSE are error delimiters defined in a config file $CI->load->vars($data); $errors = TRUE; } else { // Build a file array from all uploaded files $files[] = $CI->upload->data(); } } } // There were errors, we have to delete the uploaded files if($errors) { foreach($files as $key => $file) { @unlink($file['full_path']); } } elseif(empty($files) AND empty($data['upload_message'])) { $CI->lang->load('upload'); $data['upload_message'] = ERR_OPEN.$CI->lang->line('upload_no_file_selected').ERR_CLOSE; $CI->load->vars($data); } else { return $files; } /* ------------------------------- Insert to database */ // The problem is here: I need the file names to add to the database. // If a file with the same name already exists in the folder, the upload library renames the file itself, so in that case I need the renamed file name :/ } }

    Read the article

  • html5 uploader + jquery drag & drop: how to store file data with FormData?

    - by lauthiamkok
    I am making an HTML5 drag-and-drop uploader with jQuery; below is my code so far. The problem is that I get an empty array without any data. Is this line incorrect for storing the file data - fd.append('file', $thisfile);? $('#div').on( 'dragover', function(e) { e.preventDefault(); e.stopPropagation(); } ); $('#div').on( 'dragenter', function(e) { e.preventDefault(); e.stopPropagation(); } ); $('#div').on( 'drop', function(e){ if(e.originalEvent.dataTransfer){ if(e.originalEvent.dataTransfer.files.length) { e.preventDefault(); e.stopPropagation(); // The file list. var fileList = e.originalEvent.dataTransfer.files; //console.log(fileList); // Loop the ajax post. for (var i = 0; i < fileList.length; i++) { var $thisfile = fileList[i]; console.log($thisfile); // HTML5 form data object. var fd = new FormData(); //console.log(fd); fd.append('file', $thisfile); /* var file = {name: fileList[i].name, type: fileList[i].type, size:fileList[i].size}; $.each(file, function(key, value) { fd.append('file['+key+']', value); }) */ $.ajax({ url: "upload.php", type: "POST", data: fd, processData: false, contentType: false, success: function(response) { // .. do something }, error: function(jqXHR, textStatus, errorMessage) { console.log(errorMessage); // Optional } }); } /*UPLOAD FILES HERE*/ upload(e.originalEvent.dataTransfer.files); } } } ); function upload(files){ console.log('Upload '+files.length+' File(s).'); }; Then there is another method, which is to turn the file data into an array inside the jQuery code, var file = {name: fileList[i].name, type: fileList[i].type, size:fileList[i].size}; $.each(file, function(key, value) { fd.append('file['+key+']', value); }); but where is the tmp_name data inside e.originalEvent.dataTransfer.files[i]? php: print_r($_POST); $uploaddir = './uploads/'; $file = $uploaddir . basename($_POST['file']['name']); if (move_uploaded_file($_POST['file']['tmp_name'], $file)) { echo "success"; } else { echo "error"; } As you can see, tmp_name is needed to move the uploaded file via PHP... html: <div id="div">Drop here</div>
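
    For reference, fd.append('file', file) with a File object is the standard way to attach a dropped file to a FormData object; the browser then transmits it as a multipart/form-data upload, which means that on the PHP side the file arrives in $_FILES (with its own tmp_name filled in by PHP on receipt), not in $_POST, and there is no tmp_name on the client at all. A minimal sketch of that flow, assuming the same #div element and upload.php endpoint used above, might look like this:

        // Minimal sketch: send each dropped file through FormData with jQuery.
        // Assumes an endpoint named "upload.php", as in the question above.
        $('#div').on('drop', function (e) {
            e.preventDefault();
            e.stopPropagation();

            var files = e.originalEvent.dataTransfer.files;

            for (var i = 0; i < files.length; i++) {
                var fd = new FormData();
                // Append the File object itself; name/size/type travel with it,
                // and the server-side tmp_name is created by PHP when it receives the upload.
                fd.append('file', files[i]);

                $.ajax({
                    url: 'upload.php',
                    type: 'POST',
                    data: fd,
                    processData: false, // let the browser build the multipart body
                    contentType: false, // so the multipart boundary header is set automatically
                    success: function (response) {
                        console.log(response); // server side: read $_FILES['file'], not $_POST['file']
                    }
                });
            }
        });

    On the server, the temporary file is then available as $_FILES['file']['tmp_name'], which is what move_uploaded_file() expects.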

    Read the article

  • Where do you put your unit tests?

    - by soulmerge
    I have found several conventions for organizing unit tests in a project and I'm not sure which approach would be suitable for our next PHP project. I am trying to find the best convention to encourage easy development and accessibility of the tests when reviewing the source code. I would be very interested in your experience/opinion regarding each: One folder for productive code, another for unit tests: This separates unit tests from the logic files of the project. This separation of concerns is as much a nuisance as it is an advantage: Someone looking into the source code of the project will - so I suppose - either browse the implementation or the unit tests (or more commonly: the implementation only). The advantage of unit tests being another viewpoint on your classes is lost - those two viewpoints are just too far apart IMO. Annotated test methods: Any modern unit testing framework I know allows developers to create dedicated test methods, annotating them (@test) and embedding them in the project code. The big drawback I see here is that the project files get cluttered. Even if these methods are separated using a comment header (like UNIT TESTS below this line) it just bloats the class unnecessarily. Test files within the same folders as the implementation files: Our file naming convention dictates that PHP files containing classes (one class per file) should end with .class.php. I could imagine that putting unit tests for a class file into another one ending in .test.php would make the tests much more visible to other developers without tainting the class. Although it bloats the project folders instead of the implementation files, this is my favorite so far, but I have my doubts: I would think others have come up with this already and discarded this option for some reason (e.g. I have not seen a Java project with the files Foo.java and FooTest.java within the same folder.) Maybe it's because Java developers make heavier use of IDEs that allow them easier access to the tests, whereas in PHP no big editors have emerged (like Eclipse for Java) - many devs I know use vim/emacs or similar editors with little support for PHP development per se. What is your experience with any of these unit test placements? Do you have another convention I haven't listed here? Or am I just overrating unit test accessibility to reviewers?

    Read the article

  • Converting .wav (CCITT A-Law format) to .mp3 using LAME

    - by iar
    I would like to convert wav files to mp3 using the LAME encoder (lame.exe). The wav files are recorded with the following specifications: Bit rate: 64 kbps Audio sample size: 8 bit Channels: 1 (mono) Audio sample rate: 8 kHz Audio format: CCITT A-Law If I try to convert such a wav file using LAME, I get the following error message: Unsupported data format: 0x0006 Could anyone provide me with a command line string using lame.exe that will enable me to convert this kind of wav file?

    Read the article

  • Restore Bak File created on Windows XP pro in Windows XP Pro 64

    - by Kobojunkie
    I have a situation that I need help with. I backed up my files on Windows XP using the System Backup utility/wizard, and then installed a new operating system on the machine. Now I want to restore my old files via the .bak file, but it is not being recognized at all. Did I do this wrong, or is there a way to still get back my old files on my new OS? Thanks in advance!

    Read the article

  • installing bitnami stacks with virtualbox

    - by dreftymac
    Background: I cannot seem to get the vmdk files to work with VirtualBox as a way of using Bitnami stacks. The documentation says you can use VirtualBox, but there is no detail except how to use VMware Player. I know how to get .iso files working with VirtualBox, but not the files that Bitnami uses. Question: Does anyone have experience getting this specific configuration to work?

    Read the article

  • WDS's MDT DeploymentShare and REMINST replicated with DFS-R does not read WIM from local WDS

    - by mbrownnyc
    I've read several guides on using DFS-R with WDS and MDT to replicate REMINST and DeploymentShare, and I have a particularly strange problem. On the receiving server, after configuring WDS and mounting the DeploymentShare into MDT's DeploymentWorkbench, I also performed the following: 1) in .\Control\Bootstrap.ini, changed DeployRoot to \%wdsserver%\DeploymentShare$ 2) Changed the UNC path at the root of the MDT Deployment Share in the DeploymentWorkbench to match that of the current server. 3) In Unattend.xml files located: .\Control**, modified the following value to match the current server: <cpi:offlineImage catelog://HOST/ I am able to boot and grab the LiteTouch PE image off the local WDS TFTP server, but the WIM files, the scripts, everything else is being pulled off the WDS server at the remote site (the original WDS server that was the source of the files within the DFS-R replicated folder). What do I do in order to solve this problem? I've grepped all the files below the DeploymentShare to look for instances of the hostname of the WDS server at the remote site (the source of the files), but I found none. Here are the guides I referred to: http://technet.microsoft.com/en-us/library/cc771324%28WS.10%29.aspx http://blogs.technet.com/b/askds/archive/2009/12/16/wds-and-dfsr-love-at-first-sync.aspx http://oasysadmin.com/2011/11/03/copying-moving-and-replicating-the-mdt-2010-deployment-share/

    Read the article

  • NetApp FAS 2040 LDAP Win2k8R2

    - by it_stuck
    I am trying to get my FAS2040 to action user lookups using LDAP, below is the filer configuration options: filer> options ldap ldap.ADdomain dc1.colour.domain.local ldap.base OU=Users,OU=something1,OU=something2,OU=darkside,DC=colour,DC=domain,DC=local ldap.base.group ldap.base.netgroup ldap.base.passwd ldap.enable on ldap.minimum_bind_level anonymous ldap.name domain-admin-account ldap.nssmap.attribute.gecos gecos ldap.nssmap.attribute.gidNumber gidNumber ldap.nssmap.attribute.groupname cn ldap.nssmap.attribute.homeDirectory homeDirectory ldap.nssmap.attribute.loginShell loginShell ldap.nssmap.attribute.memberNisNetgroup memberNisNetgroup ldap.nssmap.attribute.memberUid memberUid ldap.nssmap.attribute.netgroupname cn ldap.nssmap.attribute.nisNetgroupTriple nisNetgroupTriple ldap.nssmap.attribute.uid uid ldap.nssmap.attribute.uidNumber uidNumber ldap.nssmap.attribute.userPassword userPassword ldap.nssmap.objectClass.nisNetgroup nisNetgroup ldap.nssmap.objectClass.posixAccount posixAccount ldap.nssmap.objectClass.posixGroup posixGroup ldap.passwd ****** ldap.port 389 ldap.servers ldap.servers.preferred ldap.ssl.enable off ldap.timeout 20 ldap.usermap.attribute.unixaccount unixaccount ldap.usermap.attribute.windowsaccount sAMAccountName ldap.usermap.base ldap.usermap.enable on output of nsswitch.conf: hosts: files dns passwd: ldap files netgroup: ldap files group: ldap files shadow: files nis Error Message(s): [filer: auth.ldap.trace.LDAPConnection.statusMsg:info]: AUTH: TraceLDAPServer- Starting AD LDAP server address discovery for dc1.colour.domain.LOCAL. [filer: auth.ldap.trace.LDAPConnection.statusMsg:info]: AUTH: TraceLDAPServer- Found no AD LDAP server addresses using DNS site query (site). [filer: auth.ldap.trace.LDAPConnection.statusMsg:info]: AUTH: TraceLDAPServer- Found no AD LDAP server addresses using generic DNS query. Could not get passwd entry for name = <random user> the filer can ping the FQDN of dc1 the filer can ping the IP of dc1 the filer cannot ping "dc1" I'm not sure where I'm going wrong, so any pointers would be great.

    Read the article

  • Hard disk with bad clusters

    - by Dan
    I have been trying to back up some files to DVD recently, and the burn process failed, saying the CRC check failed for certain files. I then tried to browse to these files in Windows Explorer and my whole machine locks up and I have to reboot. I ran check disk without the '/F /R' arguments and it told me I had bad sectors. So I re-ran it with the arguments, and check disk fails during the 'Chkdsk is verifying usn journal' stage with this error: Insufficient disk space to fix the usn journal $j data stream The hard disk is a 300 GB partition on a 400 GB disk, and there is 160 GB of free space on the partition. My OS (Windows 7) is installed on the other partition and is running fine. Any idea how I can fix this, or repair it enough to copy my files off it?

    Read the article

  • How do I upgrade django on ubuntu 9.04?

    - by Lorin Hochstein
    I've got Django 1.0.2 installed on Ubuntu 9.04. I'd like to upgrade Django, because I have an app that needs Django 1.1 or greater. I tried using pip to do the upgrade, but got the following: $ sudo pip install Django==1.1 Downloading/unpacking Django==1.1 Downloading Django-1.1.tar.gz (5.6Mb): 5.6Mb downloaded Running setup.py egg_info for package Django Installing collected packages: Django Found existing installation: Django 1.0.2-final Not uninstalling Django at /var/lib/python-support/python2.6, outside environment /usr Running setup.py install for Django changing mode of build/scripts-2.6/django-admin.py from 644 to 755 changing mode of /usr/local/bin/django-admin.py to 755 Successfully installed Django It seems like it worked, but it refuses to remove the original Django 1.02, and sure enough: $ pip freeze | grep -i django Django==1.0.2-final django-debug-toolbar==0.8.3 django-sphinx==2.2.3 $ /usr/local/bin/django-admin.py --version 1.0.2 final The problem, apparently, is that pip won't uninstall files outside of /usr. I'd like to remove the existing Django files manually, but I have no idea how to do that, because I'm unfamiliar with how Python packages are laid out in Ubuntu. It looks pretty complicated. The site-packages directory is: $ python -c "from distutils.sysconfig import get_python_lib; print get_python_lib()" /usr/lib/python2.6/dist-packages However, that's not where the django files live: $ ls -ld /usr/lib/python2.6/dist-packages/[Dd]jango* ls: cannot access /usr/lib/python2.6/dist-packages/[Dd]jango*: No such file or directory There's a /var/lib/python-support/python2.6/django directory, and the __init__.py file in that directory points to /usr/share/python-support/python-django/django/__init__.py. Clearly, pip is able to figure out where the files live. Is there any way to retrieve the list of files associated with the django package so I can just delete them manually?

    Read the article

  • Sage 50 Accounts 2010 won't run on Windows 7

    - by admintech
    I have successfully installed Sage 50 Accounts 2010 onto my 32-bit Windows 7 machine, yet whenever I try to run it I encounter - Log Name: Application Source: Application Error Date: 24/05/2010 17:14:13 Event ID: 1000 Task Category: (100) Level: Error Keywords: Classic User: N/A Computer: LukeThomas-PC.domain.co.uk Description: Faulting application name: Sage.SBD.Platform.Installation.SoftwareUpdates.UI.exe, version: 2.0.0.91, time stamp: 0x4a8c22fe Faulting module name: igdumd32.dll, version: 8.15.10.1872, time stamp: 0x4a848a05 Exception code: 0xc0000409 Fault offset: 0x00012f96 Faulting process id: 0x1778 Faulting application start time: 0x01cafb5c2493b609 Faulting application path: C:\Program Files\Common Files\Sage SBD\Sage.SBD.Platform.Installation.SoftwareUpdates.UI.exe Faulting module path: C:\Windows\system32\igdumd32.dll Report Id: 63e2246b-674f-11df-96ba-002564c97988 Event Xml: 1000 2 100 0x80000000000000 5062 Application LukeThomas-PC.domain.co.uk Sage.SBD.Platform.Installation.SoftwareUpdates.UI.exe 2.0.0.91 4a8c22fe igdumd32.dll 8.15.10.1872 4a848a05 c0000409 00012f96 1778 01cafb5c2493b609 C:\Program Files\Common Files\Sage SBD\Sage.SBD.Platform.Installation.SoftwareUpdates.UI.exe C:\Windows\system32\igdumd32.dll 63e2246b-674f-11df-96ba-002564c97988 Any help would be appreciated as I can't find anything about this error.

    Read the article

  • Problem with icacls on Windows 2003: "Acl length is incorrect"

    - by Andrew J. Brehm
    I am confused by the output of icacls on Windows 2003. Everything appears to work on Windows 2008. I am trying to change permissions on a directory: icacls . /grant mydomain\someuser:(OI)(CI)(F) This results in the following error: .: Acl length is incorrect. .: An internal error occurred. Successfully processed 0 files; Failed processing 1 files The same command used on a file named "file" works: icacls file /grant mydomain\someuser:(OI)(CI)(F) Result is: processed file: file Successfully processed 1 files; Failed processing 0 files What's going on?

    Read the article

  • Can't copy file with green filename, access denied

    - by Swinders
    Using Windows Explorer under XP, I see that some of my files have green filenames. When I try to copy one of these files I get an error reporting Access denied. The My Pictures folder also appears with green text, and I have a large number of photos from family holidays I don't want to lose. I need to back up these files as I'm changing laptops soon. We use SafeBoot to encrypt our hard drives, but I don't think this is the problem as it allows me to copy other files to removable media with no problems. Has anybody come across this before, and how can I resolve it?

    Read the article

  • What file format/database format does Picasa use?

    - by Raymond
    I am trying to figure out what file format the .db file and .pmp files are. I tried using db_dump (Berkeley DB) for the .db files, but it seems that they are not Berkeley DB, or of an older version. I have no idea what the .PMP files are. Directory of C:\Users\me\AppData\Local\Google\Picasa2\db3 6/09/2010 08:07 PM 303,748 imagedata_uid64.pmp 1/18/2010 10:34 PM 4,885 imagedata_unification_lhlist.pmp 6/09/2010 10:55 PM 155,752 imagedata_width.pmp 6/09/2010 10:55 PM 1,286,346,614 previews_0.db 6/10/2010 10:06 AM 467,168 previews_index.db Any help appreciated.

    Read the article

  • Mac server default file permissions

    - by Bobby Jack
    How do I change the default file permissions for files created on a Mac server? In case it's relevant, this is a Mac Mini running Mac OS 10.6.7. It's currently used mainly as a file server, and there are several users who need to share files. These files need to be writable by all, rather than the default which is writable only by the owner. I've been trying to do something with umask and a startup script, but I'm not sure there's a startup script that will apply to connections via Finder. I also need this to apply to files created on a client (also Macs) and copied onto the server.

    Read the article

  • SQL Server 2008 log issue

    - by George2
    Hello everyone, I am using SQL Server 2008 Enterprise. Under the log folder (on my machine it is C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Log) there are three kinds of files: • ERRORLOG, ERRORLOG.1, ERRORLOG.2 ... ERRORLOG.6; • FDLAUNCHERRORLOG, FDLAUNCHERRORLOG.1, FDLAUNCHERRORLOG.2, ... FDLAUNCHERRORLOG.6; • log_207.trc, log_208.trc, ... My question is, what are the different functions of these log files? And why are there files ending with .1, .2, etc.? Thanks in advance, George

    Read the article

  • sudo apt-get install mysql-server fails

    - by danwoods
    Hi all, I'm coming from a fresh install of Ubuntu server 9.10 and trying to install mysql-server by using 'sudo apt-get mysql-server' I get the following errors: dan@dev:~$ sudo apt-get install mysql-server [sudo] password for dan: Reading package lists... Done Building dependency tree Reading state information... Done The following extra packages will be installed: libdbd-mysql-perl libdbi-perl libhtml-template-perl libnet-daemon-perl libplrpc-perl mysql-client-5.1 mysql-server-5.1 Suggested packages: dbishell libipc-sharedcache-perl tinyca The following NEW packages will be installed: libdbd-mysql-perl libdbi-perl libhtml-template-perl libnet-daemon-perl libplrpc-perl mysql-client-5.1 mysql-server mysql-server-5.1 0 upgraded, 8 newly installed, 0 to remove and 0 not upgraded. Need to get 16.5MB of archives. After this operation, 39.0MB of additional disk space will be used. Do you want to continue [Y/n]? y Get:1 http://us.archive.ubuntu.com karmic/main libnet-daemon-perl 0.43-1 [46.9kB] Get:2 http://us.archive.ubuntu.com karmic/main libplrpc-perl 0.2020-2 [36.0kB] Get:3 http://us.archive.ubuntu.com karmic/main libdbi-perl 1.609-1 [800kB] Get:4 http://us.archive.ubuntu.com karmic/main libdbd-mysql-perl 4.011-1ubuntu1 [136kB] Get:5 http://us.archive.ubuntu.com karmic-updates/main mysql-client-5.1 5.1.37- 1ubuntu5.1 [8,202kB] Get:6 http://us.archive.ubuntu.com karmic-updates/main mysql-server-5.1 5.1.37-1ubuntu5.1 [7,186kB] Get:7 http://us.archive.ubuntu.com karmic/main libhtml-template-perl 2.9-1 [65.8kB] Get:8 http://us.archive.ubuntu.com karmic-updates/main mysql-server 5.1.37-1ubuntu5.1 [64.3kB] Fetched 16.5MB in 1min 34s (175kB/s) Preconfiguring packages ... Selecting previously deselected package libnet-daemon-perl. (Reading database ... 123083 files and directories currently installed.) Unpacking libnet-daemon-perl (from .../libnet-daemon-perl_0.43-1_all.deb) ... Selecting previously deselected package libplrpc-perl. Unpacking libplrpc-perl (from .../libplrpc-perl_0.2020-2_all.deb) ... Selecting previously deselected package libdbi-perl. Unpacking libdbi-perl (from .../libdbi-perl_1.609-1_i386.deb) ... Selecting previously deselected package libdbd-mysql-perl. Unpacking libdbd-mysql-perl (from .../libdbd-mysql-perl_4.011-1ubuntu1_i386.deb) ... Selecting previously deselected package mysql-client-5.1. Unpacking mysql-client-5.1 (from .../mysql-client-5.1_5.1.37-1ubuntu5.1_i386.deb) ... Selecting previously deselected package mysql-server-5.1. Unpacking mysql-server-5.1 (from .../mysql-server-5.1_5.1.37-1ubuntu5.1_i386.deb) ... Selecting previously deselected package libhtml-template-perl. Unpacking libhtml-template-perl (from .../libhtml-template-perl_2.9-1_all.deb) ... Selecting previously deselected package mysql-server. Unpacking mysql-server (from .../mysql-server_5.1.37-1ubuntu5.1_all.deb) ... Processing triggers for man-db ... Processing triggers for ureadahead ... ureadahead will be reprofiled on next reboot Setting up libnet-daemon-perl (0.43-1) ... Setting up libplrpc-perl (0.2020-2) ... Setting up libdbi-perl (1.609-1) ... Setting up libdbd-mysql-perl (4.011-1ubuntu1) ... Setting up mysql-client-5.1 (5.1.37-1ubuntu5.1) ... Setting up mysql-server-5.1 (5.1.37-1ubuntu5.1) ... * Stopping MySQL database server mysqld [ OK ] * Starting MySQL database server mysqld [fail] invoke-rc.d: initscript mysql, action "start" failed. 
dpkg: error processing mysql-server-5.1 (--configure): subprocess installed post-installation script returned error exit status 1 Setting up libhtml-template-perl (2.9-1) ... dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-server-5.1; however: Package mysql-server-5.1 is not configured yet. dpkg: error processing mysql-server (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. Errors were encountered while processing: mysql-server-5.1 mysql-server E: Sub-process /usr/bin/dpkg returned an error code (1) What am I missing? [update] mysqld returns: dan@dev:~$ sudo mysqld [sudo] password for dan: 100220 12:18:17 [Note] Plugin 'FEDERATED' is disabled. InnoDB: Unable to lock ./ibdata1, error: 11 InnoDB: Check that you do not already have another mysqld process InnoDB: using the same InnoDB data or log files. 100220 12:18:17 InnoDB: Retrying to lock the first data file InnoDB: Unable to lock ./ibdata1, error: 11 InnoDB: Check that you do not already have another mysqld process This goes on for a while... InnoDB: Unable to lock ./ibdata1, error: 11 InnoDB: Check that you do not already have another mysqld process InnoDB: using the same InnoDB data or log files. ^[[BInnoDB: Unable to lock ./ibdata1, error: 11 InnoDB: Check that you do not already have another mysqld process InnoDB: using the same InnoDB data or log files. 100220 12:19:57 InnoDB: Unable to open the first data file InnoDB: Error in opening ./ibdata1 100220 12:19:57 InnoDB: Operating system error number 11 in a file operation. InnoDB: Error number 11 means 'Resource temporarily unavailable'. InnoDB: Some operating system error numbers are described at InnoDB: http://dev.mysql.com/doc/refman/5.1/en/operating-system-error-codes.html InnoDB: Could not open or create data files. InnoDB: If you tried to add new data files, and it failed here, InnoDB: you should now edit innodb_data_file_path in my.cnf back InnoDB: to what it was, and remove the new ibdata files InnoDB created InnoDB: in this failed attempt. InnoDB only wrote those files full of InnoDB: zeros, but did not yet use them in any way. But be careful: do not InnoDB: remove old data files which contain your precious data! 100220 12:19:57 [ERROR] Plugin 'InnoDB' init function returned error. 100220 12:19:57 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed. 100220 12:19:57 [ERROR] Can't start server: Bind on TCP/IP port: Address already in use 100220 12:19:57 [ERROR] Do you already have another mysqld server running on port: 3306 ? 100220 12:19:57 [ERROR] Aborting 100220 12:19:57 [Warning] Forcing shutdown of 1 plugins 100220 12:19:57 [Note] mysqld: Shutdown complete How can I check what process is using port: 3306? [Update]: sudo netstat -anp | grep LISTEN returns dan@dev:~$ sudo netstat -anp | grep LISTEN [sudo] password for dan: tcp 0 0 0.0.0.0:25 0.0.0.0:* LISTEN 1372/master tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN 4391/mysqld tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 1409/cupsd tcp6 0 0 ::1:631 :::* LISTEN 1409/cupsd [More Updates]: I can log into mysql if that makes a difference

    Read the article

  • Trace directory not defined error in MS Dynamics CRM 4.0

    - by dmcollie
    I'm getting the following event log entry when I turn on tracing in CRM using the Crm Diagnostics Tool. Any ideas why it's not picking up the correct directory to place the files? CrmTrace encountered a failure creating or opening the file named C:\Program Files\Microsoft Dynamics CRM\Trace\CRM-SERVER-CrmAsyncService-bin-20091106-1.log. (Reporting Process:CrmAsyncService, AppDomain:C:\Program Files\Microsoft Dynamics CRM\Server\bin) TIA.

    Read the article

  • Drop outs when accessing share by DFS name.

    - by Stephen Woolhead
    I have a strange problem, aren't they all! I have a DFS root \\domain\files\vms; it has a single target on a different server than the namespace. I can copy a test file set from the target directly via \\server\vms$\testfiles and all is well, the files copy fine. I have repeated these tests many times. If I try to copy the files from the DFS root I get big pauses in the network traffic, about 50 seconds every couple of minutes; all the traffic just stops for the copy. If I start another copy between the same two machines during this pause, it starts copying fine, so I know it's not an issue with the disks on the server. Every once in a while the copy will fail, no errors, the progress bar will just zip all the way to 100% and the copy dialog will close. Checking the target folder shows that the copy is incomplete. I've moved the LUN to another server and had the same problem. The servers are all 2008 R2, the clients are Vista x64, Windows 7 x64 and 2008 R2, and all have the same problem. Anyone got any ideas? Cheers, Stephen More Information: I've been running a NetMon trace on the connection when the file copy fails, and what seems to stand out is that when opening a file that the copy completes on, the SMB command looks like this: SMB2: C CREATE (0x5), Name=Training\PDC2008\BB34 Live Services Notifications, Awareness, and Communications.wmv@#422082, Context=DHnQ, Context=MxAc, Context=QFid, Context=RqLs, Mid = 245376 SMB2: R CREATE (0x5), Context=MxAc, Context=RqLs, Context=DHnQ, Context=QFid, FID=0xFFFFFFFF00000015, Mid = 245376 But for the last file, when the copy dialog closes, it looks like this: SMB2: C CREATE (0x5), Name=gt\files\Media\Training\PDC2008\BB36 FAST Building Search-Driven Portals with Microsoft Office SharePoint Server 2007 and Microsoft Silverlight.wmv@#859374, Context=DHnQ, Context=MxAc, Context=QFid, Context=RqLs, Mid = 77 SMB2: R , Mid = 77 - NT Status: System - Error, Code = (58) STATUS_OBJECT_PATH_NOT_FOUND The main difference seems to be in the name: one is relative to the open file share, the other has gained the gt\files\media prefix, which is the name of the DFS target. These failures are always preceded by a logoff and logon of the SMB target. Might have to bump this one to PSS.

    Read the article
