Search Results

Search found 51448 results on 2058 pages for 'log files'.


  • All files erased after installing Ubuntu 11.04 alpha 3

    - by wifi
    Yeah, I know I should have backed up my files before proceeding; I completely forgot. Well, the thing is that I had a dual-boot system with Windows 7 and Ubuntu 10.10. Yesterday, I installed Ubuntu 11.04 alpha 3 (through a live USB). In the installation wizard I chose to have 11.04 install over 10.10, where I had no important files. However, it overwrote Windows and its data too. Is there some way to recover it? Thanks!

    Read the article

  • Reading parameters and files in the browser, looking for how to execute on the server

    - by jbcolmenares
    I have a site done in Rails, which uses JavaScript to load files and generate forms for the user to input certain information. Those files and parameters are then used by a Fortran code on the server. When the UI was on the server (using Qt), I would create a parameters file and execute the Fortran code in a thread so I wouldn't block the computer. Now that it is web-based, I need to make the server and browser talk. What's the procedure for that? Where should I start looking? I'm already using Rails + JavaScript; I need that extra tool to do the talking, and I have no idea where to start.
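
    A common pattern, regardless of framework, is to have the browser POST the parameters, write them to the file the solver expects, and launch the Fortran binary in a background process so the web server never blocks. A minimal Python sketch of the server-side piece (in Rails the equivalent would be a controller action plus a background job; ./fortran_solver and params.in are hypothetical names):

        import subprocess
        from http.server import BaseHTTPRequestHandler, HTTPServer

        class SolverHandler(BaseHTTPRequestHandler):
            def do_POST(self):
                # Read the parameters the browser sent in the request body.
                length = int(self.headers.get("Content-Length", 0))
                params = self.rfile.read(length)

                # Write them to the file the Fortran code expects.
                with open("params.in", "wb") as f:
                    f.write(params)

                # Launch the solver without blocking the web server:
                # Popen returns immediately while the solver keeps running.
                subprocess.Popen(["./fortran_solver", "params.in"])

                self.send_response(202)  # Accepted: job started, not finished
                self.end_headers()
                self.wfile.write(b"job started\n")

        if __name__ == "__main__":
            HTTPServer(("localhost", 8000), SolverHandler).serve_forever()

    The browser can then poll a second endpoint (or the job's output file) to learn when the run has finished.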

    Read the article

  • iPhone/iPad fatal error in C++ code produces no output in the log

    - by morgancodes
    I'm trying to move away from Objective-C to C++ for audio in my iPad programming, due to a few reports I've heard of Objective-C selectors sometimes causing audio glitches. So I'm starting to use pure C++ files. When a fatal error happens in one of the C++ files, I get no output in the log; the app just crashes. For example, if I do this in my C++ file:

        env = new ADSR();
        cout << "setting env to null\n";
        env = NULL;
        env->setSustainLevel(1);
        cout << "called function on non-initialized env\n";

    I get the following output:

        setting env to null

    After that, a method is called on NULL, which apparently kills the app, but absolutely nothing to that effect is reported. What do I need to do to have useful information logged when there's an error in my C++ code?

    Read the article

  • Looking for the best approach to creating new projects for environment-specific files

    - by Ness
    A ClearCase question... Overview of requirements: there are 3 different environments (DEV, TEST and PROD), which share a folder called 'common' that is used across all envs. There are multiple servers in those 3 envs, and we want to store their server- and environment-specific configuration files in ClearCase. The executable files are different for each environment, so no cross-delivery between dev/test/prod will be required. Any thoughts on how we can approach this? Is keeping it simple the best approach here: one component per VOB (DEV_Serv1, TEST_Serv1, PROD_Serv1, Dev_Serv2, Test_Serv2, etc.)? Or one VOB with multiple components? One other thing: developers here like to use snapshot views.

    Read the article

  • Writing files to an Airport Extreme using AFP

    - by Bill Oldroyd
    Using Nautilus, I can connect from Ubuntu 12.04 (64-bit) to my Apple Airport Extreme with a user name and password without a problem. I can read, browse folders and delete files. However, I cannot write files: the file is created, but its contents are not transferred. The transfer fails with the error message "kFPMiscErr", which I think means "authentication has already been established"? I have tried the command-line tools for AFP access, but these do not work either. Is there a solution to this problem?

    Read the article

  • Will uploading our .docx files to Scribd and embedding the files on our website affect search engine rankings?

    - by user1439968
    We have prepared notes for university students, which are in .docx format, and we want to put them on our website for viewing. We tried one option: uploading the files to Scribd and embedding them on our website in the Scribd viewer. Will making the documents available through the Scribd viewer on our website affect search engine rankings? Will search engines treat it as duplicate content, since the files are already uploaded on Scribd and we are embedding them on our website? (On Scribd we have set the uploaded documents to 'private', though.) If it does affect rankings, can you suggest a suitable way to make the .docx files viewable on our website that doesn't?

    Read the article

  • Python, web log data mining for frequent patterns

    - by descent
    Hello! I need to develop a tool for web log data mining. Given many sequences of URLs requested in particular user sessions (retrieved from web-application logs), I need to figure out the patterns of usage and the groups (clusters) of users of the website. I am new to data mining and have been searching Google a lot. I found some useful information; for example, querying 'frequent pattern mining in web log data' seems to point to almost exactly this kind of study. So my questions are: Are there any Python-based tools that do what I need, or at least something similar? Can the Orange toolkit be of any help? Can reading the book Programming Collective Intelligence be of any help? What should I Google for, what should I read, and which relatively simple algorithms are best to use? I am very limited in time (around a week), so any help would be extremely precious. What I need is a pointer in the right direction and advice on how to accomplish the task in the shortest time. Thanks in advance!
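
    The simplest instance of frequent pattern mining on such data is counting how often consecutive URL pairs recur across sessions; patterns above a support threshold are "frequent". A minimal sketch in plain Python (the session data here is hypothetical; in practice it would be parsed from the logs):

        from collections import Counter

        # Each session is the ordered list of URLs one user requested.
        sessions = [
            ["/home", "/search", "/product/1", "/cart"],
            ["/home", "/search", "/product/2"],
            ["/home", "/product/1", "/cart", "/checkout"],
        ]

        # Count consecutive URL pairs (length-2 subsequences) over all sessions.
        pair_counts = Counter()
        for session in sessions:
            for a, b in zip(session, session[1:]):
                pair_counts[(a, b)] += 1

        # Keep only pairs that occur at least min_support times.
        min_support = 2
        for (a, b), n in pair_counts.most_common():
            if n >= min_support:
                print(a, "->", b, ":", n)

    Real frequent-sequence miners (e.g. the apriori family) generalize this to longer patterns, and clustering users can then operate on the per-user pattern counts as feature vectors.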

    Read the article

  • grep from a log file to get count

    - by subodh1989
    I have to get a certain count from some files. The grep statement I am using is like this:

        counter_pstn=0
        completed_count_pstn=0
        rec=0
        for rec in `(grep "merged" update_completed*.log | awk '{print $1}' | sed 's/ //g' | cut -d':' -f2)`
        do
          if [ $counter_pstn -eq 0 ]
          then
            completed_count_pstn=$rec
          else
            completed_count_pstn=$(($completed_count_pstn+$rec))
          fi
          counter_pstn=$(($counter_pstn+1))
        done
        echo "Completed Orders PSTN Primary " $completed_count_pstn

    But the log file contains data in this format:

        2500 rows merged.
        2500 rows merged.
        2500 rows merged.
        2500 rows merged.2500 rows merged.
        2500 rows merged.
        2500 rows merged.

    As a result, it is missing the count of one merge (e.g. on line 4 above, two records share one physical line). How do I modify the grep, or use another function, to get the count? NOTE that the 2500 may be a different number in different logs, so we have to use the "rows merged" pattern to get the count. I have tried the -o and -w grep options, but they are not working. Expected output from the above data: 17500. Actual output: 15000.
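
    Since two "N rows merged." records can share one physical line, the fix is to extract every match per line rather than one field per line. In shell that is grep -ho '[0-9]* rows merged' update_completed*.log piped into awk '{s+=$1} END {print s}'. The same idea as a minimal Python sketch, assuming the logs are readable from the current directory:

        import glob
        import re

        total = 0
        for path in glob.glob("update_completed*.log"):
            with open(path) as f:
                # findall returns every "N rows merged" record, even when
                # two records were written to the same physical line.
                for count in re.findall(r"(\d+)\s+rows merged", f.read()):
                    total += int(count)

        print("Completed Orders PSTN Primary", total)  # 17500 for the sample data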

    Read the article

  • How to maintain different settings files in TFS

    - by aggietech
    I'm currently working on integrating the TFS source control system at my work, and I've run into one small problem: I need different versions of web.config (among other config files) for different branches, due to the environments we're releasing the web application to. For example, I don't want to have to merge the web.config file every time, even though there are differences. Is there a good way to keep track of that (instead of manually diff-ing the files)? Thanks!

    Read the article

  • Option to save project files for later use in Dreamweaver?

    - by Lup T. Ma
    Does anyone know of an extension or other way to let me save a set of files in a project for later use? Example:
    - Working on site A, I have HTML files A1-A15 open (15 files).
    - I receive a request to work on site B, with new files (the number is unimportant).
    - I would like DW to remember that I was working on files A1-A15.
    - I close the site A files and focus on just the files from site B.
    - I complete the site B work.
    - I reopen the site A files all together.
    Suggestions are greatly appreciated. Thanks!

    Read the article

  • MySQL tmpdir on /dev/shm with SELinux

    - by smorfnip
    On RHEL5, I have a small MySQL database that has to write temp files. To speed up this process, I would like to move the temporary directory to /dev/shm by putting the following line into my.cnf:

        tmpdir=/dev/shm/mysqltmp

    I can create /dev/shm/mysqltmp just fine and do:

        chown mysql:mysql /dev/shm/mysqltmp
        chcon --reference /tmp/ /dev/shm/mysqltmp

    I've tried to make SELinux happy by applying the same settings that are in effect for /tmp/ (and /var/tmp/), which is presumably where MySQL is writing its tmp files if tmpdir is undefined. The problem is that SELinux complains about MySQL having access to that directory. I get the following in /var/log/messages:

        SELinux is preventing mysqld (mysqld_t) "getattr" to /dev/shm (tmpfs_t).

    SELinux is a hard mistress. Details:

        Source Context       root:system_r:mysqld_t
        Target Context       system_u:object_r:tmpfs_t
        Target Objects       /dev/shm [ dir ]
        Source               mysqld
        Source Path          /usr/libexec/mysqld
        Port                 <Unknown>
        Host                 db.example.com
        Source RPM Packages  mysql-server-5.0.77-3.el5
        Target RPM Packages
        Policy RPM           selinux-policy-2.4.6-255.el5_4.1
        Selinux Enabled      True
        Policy Type          targeted
        MLS Enabled          True
        Enforcing Mode       Enforcing
        Plugin Name          catchall_file
        Host Name            db.example.com
        Platform             Linux db.example.com 2.6.18-164.2.1.el5 #1 SMP Mon Sep 21 04:37:42 EDT 2009 x86_64 x86_64
        Alert Count          46
        First Seen           Wed Nov 4 14:23:48 2009
        Last Seen            Thu Nov 5 09:46:00 2009
        Local ID             e746d880-18f6-43c1-b522-a8c0508a1775

    ls -lZ /dev/shm shows:

        drwxrwxr-x mysql mysql system_u:object_r:tmp_t mysqltmp

    and permissions for /dev/shm itself are:

        drwxrwxrwt root root system_u:object_r:tmpfs_t shm

    I've also tried chcon -R -t mysqld_t /dev/shm/mysqltmp and setting the group on /dev/shm to mysql, with no better results. Shouldn't it be enough to tell SELinux, hey, this is a temp directory just like MySQL was using before? Short of turning off SELinux, how do I make this work? Do I need to edit SELinux policy files?

    Read the article

  • Can't resolve offline file conflicts

    - by Bryan
    We use roaming profiles on our Server 2008 R2 domain, with folder redirection for 'desktop', 'my documents' and 'application data'. As our network is split across two sites, we have one file server at each site, configured to use domain-based DFS namespaces and DFS replication to keep things in sync. The DFS path for the replication folder is as follows:

        \\domain\folderredirection$\<username>\<redirected-folder-name>

    The real paths are:

        \\site-1-server\folderredirection$\<username>\<redirected-folder-name>
        \\site-2-server\folderredirection$\<username>\<redirected-folder-name>

    As our users all switch between sites (sometimes several times per day), our folder redirection policy has to redirect to the DFS roots rather than being hardcoded to a specific server. Both DFS and DFS-R have been proven to be working perfectly. On our laptops, we use offline files for the redirected folders, and this also works fine. However, the problem is as follows: when conflicts occur in offline files, it is impossible to resolve them. I'm given the usual conflict resolution options (i.e. 'Ignore', 'Keep Both', 'Keep network' and 'Keep local'); however, not one of these options will resolve any conflict, yet no error is produced. We only use offline files on laptops, which have either Windows XP Professional or Windows 7 Professional installed. The problem is not specific to any one laptop; it affects every laptop and every conflicting file in exactly the same way. I would have thought the setup we have is common for companies that have multiple sites, so I'm hoping someone will have seen this before.

    Read the article

  • Seeking web-based FTP client for very large file upload

    - by Paul M. Nguyen
    I have looked around for these for some time... the limits imposed by the web server and/or the dynamic programming environment (e.g. PHP) are far too restrictive for the application I'm working on. We need to be able to move large graphics and video files to and from clients (ranging from tens of MB to a few GB in a single file). Plain FTP with a proper desktop client will do the trick, and we're hosting this in Amazon EC2 with EBS. User management will be done from the office via Webmin. Users are chroot-jailed into their home dir by proftpd. net2ftp will work for many clients, but we often need to move single files that approach 1 GB or exceed 2-3 GB, which is way out of the range of any HTTP-based uploader. So we turn to Java or Flash: can they establish an FTP connection from within the web browser and grab a huge file? There are licensed applets and such out there, but none seem convincing. Again, I'm looking for some code that can speak FTP and read (and write?) the local disk, that is delivered in a web browser, and can move single files of 2 GB+. The reason for having a web-based interface to FTP is to skip the software installation step for our clients. I will consider proper desktop client software as long as it's "portable", supports at least Windows and Mac, and can be easily configured by layman users in a hurry.

    Read the article

  • Nginx: redirect all requests that do not match a file to a PHP file

    - by cyrbil
    I'm trying to get all requests to:

        http://mydomain.com/downloads/*

    redirected to:

        http://mydomain.com/downloads/index.php

    except when the requested file exists in /downloads/, e.g.:

        http://mydomain.com/downloads = /downloads/index.php
        http://mydomain.com/downloads/unknowfile = /downloads/index.php
        http://mydomain.com/downloads/existingfile = /downloads/existingfile

    My current problem is that I have either the redirection to PHP working but static files not served, or the opposite. Here is my current vhost conf (which redirects fine, but static files are sent to PHP and fail):

        server {
            listen 80; ## listen for ipv4; this line is default and implied
            server_name domain.com;
            root /data/www;
            index index.php index.html;

            location / {
                try_files $uri $uri/ /index.html;
            }

            error_page 404 /404.html;

            # redirect server error pages to the static page /50x.html
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root /usr/share/nginx/www;
            }

            location ^~ /downloads {
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_index index.php;
                include fastcgi_params;
                try_files $uri @downloads;
            }

            location @downloads {
                rewrite ^ /downloads/index.php;
            }

            # pass the PHP scripts to FastCGI server
            location ~ \.php$ {
                try_files $uri =404;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
            }
        }

    To clarify: the static files are symlinks created by /downloads/index.php. Thank you for your help.

    Read the article

  • How to concatenate all commit messages from subversion into one text file with no metadata?

    - by user144182
    I would like to take all the commit messages in my Subversion log and just concatenate them into one text file. Each commit message has this format:

        - r1 message
        - r1 message
        - r1 message

    What I would like is something like:

        - r1 message
        - r1 message
        - r2 message
        - r2 message
        - r3 message
        [...]
        - r1000 message

    Update: I thought the above was clear, but what I don't want in the log is this type of info:

        r2130 | user | 2010-03-19 10:36:13 -0400 (Fri, 19 Mar 2010) | 1 line

    No metadata; I simply want the commit messages.
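
    One hedged approach: svn log --xml wraps each commit message in a <msg> element, so the metadata can be dropped by parsing the XML instead of scraping the plain-text format. A minimal Python sketch, assuming svn is on the PATH and the script runs inside a working copy:

        import subprocess
        import xml.etree.ElementTree as ET

        # Ask Subversion for the log as XML; each <logentry> carries its
        # commit message in a <msg> element, separate from the metadata.
        xml_log = subprocess.run(
            ["svn", "log", "--xml"], capture_output=True, text=True, check=True
        ).stdout

        root = ET.fromstring(xml_log)
        with open("messages.txt", "w") as out:
            for entry in root.findall("logentry"):
                msg = entry.findtext("msg") or ""
                out.write(msg.strip() + "\n")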

    Read the article

  • How to hide files in Apache 2.2 WebDAV Directory listings

    - by mdornsf
    I use Apache 2.2 as a WebDAV file server for a bunch of Mac and MS Windows clients. Unfortunately, both clutter the filesystem with files like .DS_Store or thumbs.db. Since the files distract my users, I want to hide them from directory listings. Unfortunately, the standard way of hiding files in Apache (via IndexIgnore) does not seem to work over WebDAV. Is there any other way to hide files?

    Read the article

  • How can I access files on a shared drive from a Windows 2008 server configured with SFTP

    - by communicator
    I have installed OpenSSH on my Windows 2008 server by following the user guide here. Now I have some files on a Windows network share with the UNC path \\corp\test\testdata. I want to map this network share on my Windows 2008 server, which is configured with SFTP, so that I can access the files from my Java program by doing SFTP to the server. Is there any way I can map the network share to the C: drive or another drive on the server so that all the files on the share are available as local files there?

    Read the article

  • Emacs open files from a filename list

    - by crasic
    I have a largish TeX project that is separated into several .tex files. Every time I want to work on it, I open Emacs and manually C-x C-f all the files that I want to work on. I was wondering if there is a way to open files (from the command line) from a file containing a list of filenames, something like filelist.txt:

        file1.tex
        file2.tex
        file3.tex

    and then do:

        cat files | emacs -nw

    except that Emacs doesn't support the command as used, since it doesn't like having stdin reassigned. Any ideas?

    Read the article

  • View/Find all compressed files on the server?

    - by Volodymyr
    I need to find all compressed files/folders, regardless of file format, on a Windows Server 2003 machine. The search options do not provide this capability. Is there a way to list/view all compressed files? Perhaps this can be done with PowerShell, using file/folder attributes, and the results put into a txt file with each file's location. UPD: By compressed files/folders I mean files which appear in blue in Explorer after the compression attribute has been set on the file/folder.
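
    NTFS marks such files with the FILE_ATTRIBUTE_COMPRESSED flag, which is what Explorer's blue coloring reflects, so any tool that can read file attributes can produce the list (in PowerShell, that means testing the Compressed flag of each item's Attributes). A minimal Python sketch, assuming a Python 3.5+ interpreter can reach the disk on a Windows host, locally or over a share (the root path and output filename are hypothetical):

        import os
        import stat

        def compressed_paths(root):
            """Yield every file and folder under root whose NTFS
            'compressed' attribute is set (shown in blue in Explorer)."""
            for dirpath, dirnames, filenames in os.walk(root):
                for name in dirnames + filenames:
                    path = os.path.join(dirpath, name)
                    try:
                        attrs = os.stat(path).st_file_attributes
                    except OSError:
                        continue  # unreadable entry; skip it
                    if attrs & stat.FILE_ATTRIBUTE_COMPRESSED:
                        yield path

        if __name__ == "__main__":
            with open("compressed.txt", "w") as out:
                for path in compressed_paths("C:\\"):
                    out.write(path + "\n")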

    Read the article
