Search Results

Search found 7902 results on 317 pages for 'structure'.


  • Proper Imaging Procedures to Restore and Deploy Image with Separate System Reserved Partition

    - by alharaka
    UPDATE: As per my experience here, no one responded. If I do not hear back from TechNet forum members about it, I will post a bounty here, if it makes a difference. I have banged my head against a wall for what seems like all week. I am going to explain my simple procedure, and how none of it, absolutely none, seems to work afterward despite a few alternatives and everyone on the internet telling me this is how to do it. Diskpart Commands to Create FS Structure REM Select the disk targeted for deployment. REM REM NOTE: Usually disk 0, but drive failure can make it external USB REM media. This will erase the drive regardless! select disk 0 REM Remove previous formatting. clean REM Create System Reserved partition bootloader and files. create partition primary size=100 REM Format the volume format fs=ntfs label="System Reserved" quick override noerr REM Assign the System Reserved partition the C: mount for now assign letter=C REM The main system partition, size not specified to occupy whole drive. create partition primary REM Format the volume format fs=ntfs quick override noerr REM Assign the OS partition the D: mount for now assign letter=D REM Make this the active/bootable partition. sel disk 0 sel partition 1 active REM Close out the diskpart session. exit Now, I thought this was madness, but it turns out the System Reserved partition and standard "System Partition" (C:, commonly both the boot and system volumes where you find the Windows directory AND the bootmgr/ntldr hardware files, this is where Windows 7 diverges) as mounted in the Windows PE session where I run these commands do not matter. See reference here. Since this needs to be BitLocker-ready, enter this crappy System Reserved partition that is a separate 100MB of awesome that goes before the regular boot volume. I do this, then I proceed to the next step. Deploy System Reserved and Normal System Images REM C: is still the "System Reserved Partition", and the image is just like it sounds. imagex /apply G:\images\systemreserved.wim 1 C: REM D: is now what will be the C: system partition on reboot, supposedly. imagex /apply G:\images\testimage.wim 1 D: Reboot the system Now, the images I just captured should look good. This is not even sysprepped; I am reapplying the same fscking image I prepared on the same reference workstation hours before. Problem is I get 0xc000000e could not detect the accessible boot device \Windows\system32\winload.exe or different kinds of nonsense revolving around being able to find the boot volume with all the right files. I try different variations of things, and now none of them work. I tried repairs with bcdboot, with a fresh System Reserved partition or not, bootrec, and manually editing the damn BCD store with bcdedit. I tried finalizing the above process with and without bootsect /nt60 C: /force. I need to wrap up and automate this procedure. What am I doing wrong that does not make the image happy, but really just miserable?
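
    For the "wrap up and automate" part, here is a minimal sketch of how the manual steps above could be scripted, assuming Python is available in the deployment environment (otherwise the same sequence translates directly into a batch file). The image paths and drive letters come from the question; the final bcdboot call, rebuilding the BCD on the System Reserved partition from the applied OS image, is an assumption drawn from the repairs already mentioned, not a confirmed fix.

        import os
        import subprocess
        import sys
        import tempfile

        # Same partition layout as the diskpart session in the question.
        DISKPART_COMMANDS = [
            "select disk 0",
            "clean",
            "create partition primary size=100",
            'format fs=ntfs label="System Reserved" quick override noerr',
            "assign letter=C",
            "active",                      # the 100MB partition still has focus here
            "create partition primary",
            "format fs=ntfs quick override noerr",
            "assign letter=D",
            "exit",
        ]

        def run(cmd):
            """Echo each command and stop the whole procedure on the first failure."""
            print(">", " ".join(cmd))
            subprocess.run(cmd, check=True)

        def main():
            with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as script:
                script.write("\n".join(DISKPART_COMMANDS) + "\n")
            try:
                run(["diskpart", "/s", script.name])
                # Apply the captured images exactly as in the manual procedure.
                run(["imagex", "/apply", r"G:\images\systemreserved.wim", "1", "C:"])
                run(["imagex", "/apply", r"G:\images\testimage.wim", "1", "D:"])
                # Assumption: regenerate the boot files on the System Reserved
                # partition from the freshly applied Windows image.
                run(["bcdboot", r"D:\Windows", "/s", "C:"])
            finally:
                os.remove(script.name)

        if __name__ == "__main__":
            sys.exit(main())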

    Read the article

  • High I/O latency with software RAID, LUKS encrypted and LVM partitioned KVM setup

    - by aef
    I found out a performance problems with a Mumble server, which I described in a previous question are caused by an I/O latency problem of unknown origin. As I have no idea what is causing this and how to further debug it, I'm asking for your ideas on the topic. I'm running a Hetzner EX4S root server as KVM hypervisor. The server is running Debian Wheezy Beta 4 and KVM virtualisation is utilized through LibVirt. The server has two different 3TB hard drives as one of the hard drives was replaced after S.M.A.R.T. errors were reported. The first hard disk is a Seagate Barracuda XT ST33000651AS (512 bytes logical, 4096 bytes physical sector size), the other one a Seagate Barracuda 7200.14 (AF) ST3000DM001-9YN166 (512 bytes logical and physical sector size). There are two Linux software RAID1 devices. One for the unencrypted boot partition and one as container for the encrypted rest, using both hard drives. Inside the latter RAID device lies an AES encrypted LUKS container. Inside the LUKS container there is a LVM physical volume. The hypervisor's VFS is split on three logical volumes on the described LVM physical volume: one for /, one for /home and one for swap. Here is a diagram of the block device configuration stack: sda (Physical HDD) - md0 (RAID1) - md1 (RAID1) sdb (Physical HDD) - md0 (RAID1) - md1 (RAID1) md0 (Boot RAID) - ext4 (/boot) md1 (Data RAID) - LUKS container - LVM Physical volume - LVM volume hypervisor-root - LVM volume hypervisor-home - LVM volume hypervisor-swap - … (Virtual machine volumes) The guest systems (virtual machines) are mostly running Debian Wheezy Beta 4 too. We have one additional Ubuntu Precise instance. They get their block devices from the LVM physical volume, too. The volumes are accessed through Virtio drivers in native writethrough mode. The IO scheduler (elevator) on both the hypervisor and the guest system is set to deadline instead of the default cfs as that happened to be the most performant setup according to our bonnie++ test series. The I/O latency problem is experienced not only inside the guest systems but is also affecting services running on the hypervisor system itself. The setup seems complex, but I'm sure that not the basic structure causes the latency problems, as my previous server ran four years with almost the same basic setup, without any of the performance problems. On the old setup the following things were different: Debian Lenny was the OS for both hypervisor and almost all guests Xen software virtualisation (therefore no Virtio, also) no LibVirt management Different hard drives, each 1.5TB in size (one of them was a Seagate Barracuda 7200.11 ST31500341AS, the other one I can't tell anymore) We had no IPv6 connectivity Neither in the hypervisor nor in guests we had noticable I/O latency problems According the the datasheets, the current hard drives and the one of the old machine have an average latency of 4.12ms.
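
    One way to narrow down which layer introduces the latency is to run the same small synchronous-write probe at each level of the stack (on the hypervisor against / and /home, and inside a guest against its Virtio-backed root) and compare the numbers. A rough Python sketch; the mount points passed as arguments are placeholders:

        import os
        import sys
        import time

        def fsync_latency(directory, samples=200, size=4096):
            """Time small write+fsync cycles against a file in the given directory."""
            payload = b"\0" * size
            probe = os.path.join(directory, "latency_probe.tmp")
            timings = []
            with open(probe, "wb") as f:
                for _ in range(samples):
                    start = time.time()
                    f.write(payload)
                    f.flush()
                    os.fsync(f.fileno())          # force the write down the whole stack
                    timings.append(time.time() - start)
            os.remove(probe)
            timings.sort()
            return {
                "avg_ms": 1000 * sum(timings) / len(timings),
                "max_ms": 1000 * timings[-1],
            }

        if __name__ == "__main__":
            for mountpoint in sys.argv[1:] or ["/tmp"]:
                print(mountpoint, fsync_latency(mountpoint))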

    Read the article

  • Why isn't this rewrite rule (nginx) applied? (trying to setup Wordpress multisite)

    - by Brian Park
    Hi, I'm trying to setup Wordpress multisite (subfolder structure) with nginx, but having a problem with this rewrite rule. Below is the Apache's .htaccess, which I have to translate into nginx configuration. RewriteEngine On RewriteBase /blogs/ RewriteRule ^index\.php$ - [L] # uploaded files RewriteRule ^([_0-9a-zA-Z-]+/)?files/(.+) wp-includes/ms-files.php?file=$2 [L] # add a trailing slash to /wp-admin RewriteRule ^([_0-9a-zA-Z-]+/)?wp-admin$ $1wp-admin/ [R=301,L] RewriteCond %{REQUEST_FILENAME} -f [OR] RewriteCond %{REQUEST_FILENAME} -d RewriteRule ^ - [L] RewriteRule ^([_0-9a-zA-Z-]+/)?(wp-(content|admin|includes).*) $2 [L] RewriteRule ^([_0-9a-zA-Z-]+/)?(.*\.php)$ $2 [L] RewriteRule . index.php [L] Below is what I came up with: server { listen 80; server_name example.com; server_name_in_redirect off; expires 1d; access_log /srv/www/example.com/logs/access.log; error_log /srv/www/example.com/logs/error.log; root /srv/www/example.com/public; index index.html; try_files $uri $uri/ /index.html; # rewriting uploaded files rewrite ^/blogs/(.+/)?files/(.+) /blogs/wp-includes/ms-files.php?file=$2 last; # add a trailing slash to /wp-admin rewrite ^/blogs/(.+/)?wp-admin$ /blogs/$1wp-admin/ permanent; if (!-e $request_filename) { rewrite ^/blogs/(.+/)?(wp-(content|admin|includes).*) /blogs/$2 last; rewrite ^/blogs/(.+/)?(.*\.php)$ /blogs/$2 last; } location /blogs/ { index index.php; #try_files $uri $uri/ /blogs/index.php?q=$uri&$args; } location ~ \.php$ { include /etc/nginx/fastcgi_params; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /srv/www/example.com/public$fastcgi_script_name; } # static assets location ~* ^.+\.(manifest)$ { access_log /srv/www/example.com/logs/static.log; } location ~* ^.+\.(ico|ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ { # only set expires max IFF the file is a static file and exists if (-f $request_filename) { expires max; access_log /srv/www/example.com/logs/static.log; } } } In the above code, I believe rewrite ^/blogs/(.+/)?(.*\.php)$ /blogs/$2 last; has no effect because when I look at the access_log file, I see the following line: 2010/09/15 01:14:55 [error] 10166#0: *8 "/srv/www/example.com/public/blogs/test/index.php" is not found (2: No such file or directory), request: "GET /blogs/test/ HTTP/1.1" (Here, 'test' is the second blog created using multisite feature) What I'm expecting is that /blogs/test/index.php gets rewritten to /blogs/index.php, but it doesn't seem to do that... Am I overlooking something obvious? Thanks!
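
    As a quick sanity check of the patterns themselves (this says nothing about nginx's evaluation order or the if/try_files interaction, only whether the regexes match the way you expect), the rewrites can be replayed in a few lines of Python against the failing URI:

        import re

        # The same regex/replacement pairs as in the nginx config above.
        rules = [
            (r"^/blogs/(.+/)?files/(.+)", r"/blogs/wp-includes/ms-files.php?file=\2"),
            (r"^/blogs/(.+/)?(wp-(content|admin|includes).*)", r"/blogs/\2"),
            (r"^/blogs/(.+/)?(.*\.php)$", r"/blogs/\2"),
        ]

        def apply_rules(uri):
            for pattern, replacement in rules:
                match = re.search(pattern, uri)
                if match:
                    return pattern, match.expand(replacement)
            return None, uri

        for uri in ("/blogs/test/", "/blogs/test/index.php", "/blogs/test/wp-admin"):
            print(uri, "->", apply_rules(uri))

    If the middle case maps /blogs/test/index.php to /blogs/index.php as expected, the pattern itself is fine, and the problem is more likely in how and where nginx evaluates the rules than in the regex.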

    Read the article

  • VBA + Polymorphism: Override worksheet functions from 3rd party

    - by phi
    My company makes extensive use of a data provider using a (closed source) VBA plugin. In principle, every query follows a certain structure: fill one cell with a formula, where arguments to the formula specify the query; the range of that formula is extended (not an array formula!) and cells below/right are filled with data. For this to work, however, a user has to have a terminal program installed on the machine, as well as a COM plugin referenced in VBA/Excel. My problem: these Excel sheets are used and extended by multiple users, and not all of them have access to the data provider. While they can open the sheet, it will recalculate and the data will be gone. However, frequent recalculation is required. I would like every user to be able to use the sheets, without executing a very specific set of formulas. Attempts: remove the reference on those computers where I do not have terminal access. This generates a NAME error in the cell containing the query (acceptable), but this query overrides parts of the data (not acceptable). If you allow the program to refresh, all data will be gone after a failed query. Replace all formulas with the plain-text result in the respective cells (press a button and loop over every cell...). Obviously this destroys any refresh capabilities the queries offer for all subsequent users, so pretty bad, too. A theoretical idea, and I'm not sure how to implement it: replace the functions offered by the plugin with something that will be called either first (and relay the query through to the original function, if that's available) or instead of the original function (by only deploying the solution on non-terminal machines), which just returns the original value. More specifically, if my query function is used like this: =GETALLDATA(Startdate, Enddate, Stockticker, etc) I would like to transparently swap the function behind the call. Do you see any hope, or am I lost? I appreciate your help. PS: Of course I'm talking about Bloomberg... Some additional points to clarify issues raised by Frank: The formula in the sheets may not be changed. This is mission-critical software, and it's way too complex for any sane person to try and touch it. Only Excel and VBA may be used (which is the reason for the previous point...) It would be sufficient to prevent execution of these few specific formulas/functions on a specific machine for all Excel sheets to come. This looks more and more like a problem for Stack Overflow ;-)

    Read the article

  • Different files on shared partition?

    - by Matt Robertson
    I am dual-booting Windows 8 and Ubuntu 12.04. My partition scheme looks like this: /dev/sda1 - Windows 8 (nfts) /dev/sda2 - Ubuntu / (ext4) /dev/sda3 - Ubuntu home (ext4) /dev/sda5 - swap /dev/sda6 - Shared data partition (exfat) (First off, yes I do have exfat libraries installed on Ubuntu) I created some PNG images in Windows and saved them on my shared partition. From Ubuntu, I edited the images in GIMP and saved them (replacing the ones on the shared partition). When I boot into Windows, the files appear unchanged - exactly like they did before I edited them from Ubuntu. I even added a folder and deleted some other files, but none of these changes exist in Windows. When I boot into Ubuntu, all of the changes are still there. It is as if Windows is caching the old file structure... How is this possible? Thanks in advance. Edit -- commands output ~~ lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 465.8G 0 disk +-sda1 8:1 0 165.1G 0 part +-sda2 8:2 0 21.3G 0 part / +-sda3 8:3 0 98.9G 0 part /home +-sda4 8:4 0 1K 0 part +-sda5 8:5 0 7.8G 0 part [SWAP] +-sda6 8:6 0 172.7G 0 part /mnt/shared_data ~~ /etc/fstab # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc nodev,noexec,nosuid 0 0 # /dev/sda2 UUID=8f700f65-b5c7-4afc-a6fb-8f9271e0fb5e / ext4 errors=remount-ro 0 1 # /dev/sda3 UUID=f0d688b7-22bd-4fa7-bc1b-a594af2933fa /home ext4 defaults 0 2 # /dev/sda5 UUID=3bc2399b-5deb-4f04-924b-d4fc77491997 none swap sw 0 0 # /dev/sda6 UUID=F2DE-BC47 /mnt/shared_data exfat defaults 0 3 ~~ /etc/mtab /dev/sda2 / ext4 rw,errors=remount-ro 0 0 proc /proc proc rw,noexec,nosuid,nodev 0 0 sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0 none /sys/fs/fuse/connections fusectl rw 0 0 none /sys/kernel/debug debugfs rw 0 0 none /sys/kernel/security securityfs rw 0 0 udev /dev devtmpfs rw,mode=0755 0 0 devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0 tmpfs /run tmpfs rw,noexec,nosuid,size=10%,mode=0755 0 0 none /run/lock tmpfs rw,noexec,nosuid,nodev,size=5242880 0 0 none /run/shm tmpfs rw,nosuid,nodev 0 0 /dev/sda3 /home ext4 rw 0 0 /dev/sda6 /mnt/shared_data fuseblk rw,nosuid,nodev,allow_other,blksize=4096 0 0 binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,noexec,nosuid,nodev 0 0 gvfs-fuse-daemon /home/matt/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,user=matt 0 0
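
    One way to confirm whether the bytes on disk actually differ from what Windows shows (i.e., whether this is a stale cached view rather than genuinely separate data) is to hash the shared partition from each OS and diff the two manifests. A small Python sketch, assuming Python is available on both sides; the paths and file names are placeholders:

        import hashlib
        import json
        import os
        import sys

        def manifest(root):
            """Record a SHA-256 digest for every file under the shared partition."""
            result = {}
            for dirpath, _dirs, files in os.walk(root):
                for name in files:
                    path = os.path.join(dirpath, name)
                    digest = hashlib.sha256()
                    with open(path, "rb") as f:
                        for chunk in iter(lambda: f.read(1 << 20), b""):
                            digest.update(chunk)
                    result[os.path.relpath(path, root)] = digest.hexdigest()
            return result

        if __name__ == "__main__":
            # e.g. python manifest.py /mnt/shared_data ubuntu.json  (then again from Windows)
            root, out = sys.argv[1], sys.argv[2]
            with open(out, "w") as f:
                json.dump(manifest(root), f, indent=2, sort_keys=True)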

    Read the article

  • Unextending Sharepoint 2007 Web Application from a zone

    - by dunxd
    When our Sharepoint was migrated from Sharepoint 2003 to Sharepoint 2007 (both fully paid versions), the consultants who carried it out extended each web app into two IIS sites/zones (e.g. the original Web App was http://intranet, then http://newintranet and http://intranet would be created for Sharepoint 2007 - each with its own IIS site). The idea was that during the migration period we would set up DNS to point the old url to SP2003 servers and the new one to SP2007, then once the migration was complete, do a DNS change so the SP2007 would recieve the requests to the http://intranet type URLs. Unfortunately the contractors did not tidy up the application extensions and IIS sites after the migration, and for some time both URLs were in use, resulting in many document links pointing to the http://newintranet type URLs. This means I need to maintain these URLs. Due to a rejig of organisation structure we now need to relocate some Sharepoint sites, and I'd like to use the RDA Collaboration Sharepoint URL Redirector feature. However a limitation of this is that it doesn't work for Web Applications which have been extended into multiple zones. So I have a need to tidy up the situation that our consultants left behind. I think the right thing to do is use the "Remove Sharepoint from IIS Web Site" page in Central Admin to remove the zone for the newintranet type sites, and select the option to also delete the IIS site. That should result in having no IIS sites listening for http://newintranet type URLs. Is this the right procedure? Once I have done that I need to set up Sharepoint to receive requests sent to the http://newintranet type URLs so they will continue to work. I am not sure if I should do this: using Alternative Access Mappings or, by adding a host header to the IIS site or, creating a non Sharepoint IIS site for each http://newintranet type URL, and use IIS redirection to forward the requests to the new URL using variables to pass the path to the Sharepoint site. Does anyone have any thoughts on these options, or any other way of achieving this? Sharepoint 2007 is running on Windows 2003 with IIS6. We don't currently have plans/budget to upgrade to Sharepoint 2010.

    Read the article

  • Windows 7 cannot boot - bootrec reports FS not found or corrupt

    - by purecharger
    For 3 days now I've been unable to boot into my Windows 7 partition, and all my research has been to no avail. I'm hoping someone here has more ideas on how to fix this. When I boot up now, I get the black screen with BCD error that says theres no valid file system or it may be corrupt (pardon my lack of detail, no copy/paste is available then). When I boot with the Windows 7 disc and go into repair tools, no operating system is found, and attempting to automatically repair the problem fails with Unknown Operating System (Unknown Disk) or something similar. When I drop into the command prompt, I am able to see and navigate my C:\ drive without issue. I attempt to use bootrec: C:\> bootrec /ScanOS Finds C:\Windows as a system partition. C:\> bootrec /RebuildBCD Fails with volume does not contain a recognized file system. please make sure that all required file system drivers are loaded and that the volume is not corrupted. So then I attempt to fix the bootsector: C:\> bootsect /nt60 C: /force Which completes successfully (sorry, no output..) Upon rebooting, I have the same problem. I've also tried all of the above after making my Windows partition active: C:\> diskpart DISKPART> select disk 1 DISKPART> select partition 1 DISKPART> active DISKPART> exit Then bootrec as above, both with and without a reboot after the DISKPART commands. Then I've also tried rebuilding the BCD store by hand: set systemdrive=C: set tempbcd=C:\boot\bcd.temp set tempfile=C:\boot\temp.txt bcdedit -createstore %tempbcd% bcdedit.exe -store %tempbcd% -create {bootmgr} -d "Windows Boot Manager" bcdedit -store %tempbcd% -create -d "Windows Vista" -application osloader>%tempfile% set /p winvistaguid= <%tempfile% set winvistaguid=%winvistaguid:~10,38% bcdedit -store %tempbcd% -set %winvistaguid% osdevice partition=%systemdrive% bcdedit -store %tempbcd% -set %winvistaguid% device partition=%systemdrive% bcdedit -store %tempbcd% -set %winvistaguid% path \Windows\system32\winload.exe bcdedit -store %tempbcd% -set %winvistaguid% systemroot \Windows bcdedit -import %tempbcd% However on the import, I get my familiar friendly message: volume does not contain a recognized file system. please make sure that all required file system drivers are loaded and that the volume is not corrupted I'm at my wits end here, and I cannot understand why Windows refuses to see this as a valid install. When I list the disk/partition in DISKPART, it shows up as NTFS and "Healthy", and I can navigate the directory structure from DOS with no problems. I really, really do not want to reformat and reinstall. I know this problem can be solved!
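
    Because the hand-rolled batch sequence above keeps going after a failed command, it can be hard to see which bcdedit step actually breaks (only the final import reports an error). Here is a small Python wrapper, assuming the same store path and drive letter as above, that echoes each command and stops at the first non-zero exit; parsing the GUID out of bcdedit's output is an assumption about how the created entry is echoed back:

        import re
        import subprocess
        import sys

        STORE = r"C:\boot\bcd.temp"

        def bcd(*args):
            cmd = ["bcdedit"] + list(args)
            print(">", " ".join(cmd))
            result = subprocess.run(cmd, capture_output=True, text=True)
            print(result.stdout.strip())
            if result.returncode != 0:
                sys.exit("failed: " + result.stderr.strip())
            return result.stdout

        bcd("-createstore", STORE)
        bcd("-store", STORE, "-create", "{bootmgr}", "-d", "Windows Boot Manager")
        out = bcd("-store", STORE, "-create", "-d", "Windows 7", "-application", "osloader")
        guid = re.search(r"\{[0-9a-fA-F-]+\}", out).group(0)   # entry GUID from bcdedit output
        for option, value in [("osdevice", "partition=C:"),
                              ("device", "partition=C:"),
                              ("path", r"\Windows\system32\winload.exe"),
                              ("systemroot", r"\Windows")]:
            bcd("-store", STORE, "-set", guid, option, value)
        bcd("-import", STORE)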

    Read the article

  • WordPress 3.5 Multisite and nginx siteurl issues

    - by Florin Gogianu
    I'm setting up multisite on localhost in subdirectories. The problem is that when I'm trying to access the dashboard of a site I just created ( localhost/wptest/site/wp-admin ) I get "This webpage has a redirect loop" and when I try to access the actual website ( localhost/wptest/site ) the page loads but without assets, such as css. When I access the network dashboard, or the primary site dashboard on localhost/wptest everything is just fine. Also when I edit the permalink of the second site in the network dashboard, to be like this: localhost/site it also runs fine. How to make it work with the default permalink structure localhost/wptest/site? The wordpress files are in /usr/share/html/wptest The wp-config.php is as follows: define('WP_ALLOW_MULTISITE', true); define('MULTISITE', true); define('SUBDOMAIN_INSTALL', false); define('DOMAIN_CURRENT_SITE', 'localhost'); define('PATH_CURRENT_SITE', '/wptest/'); define('SITE_ID_CURRENT_SITE', 1); define('BLOG_ID_CURRENT_SITE', 1); And the server block / virtual host is like this: server { ##DM - uncomment following line for domain mapping listen 80 default_server; #server_name example.com *.example.com ; ##DM - uncomment following line for domain mapping #server_name_in_redirect off; access_log /var/log/nginx/example.com.access.log; error_log /var/log/nginx/example.com.error.log; root /usr/share/nginx/html/wptest; index index.html index.htm index.php; if (!-e $request_filename) { rewrite /wp-admin$ $scheme://$host$uri/ permanent; rewrite ^(/[^/]+)?(/wp-.*) $2 last; rewrite ^(/[^/]+)?(/.*\.php) $2 last; } location / { try_files $uri $uri/ /index.php?$args ; } location ~ \.php$ { try_files $uri /index.php; include fastcgi_params; fastcgi_pass unix:/var/run/php5-fpm.sock; } location ~* ^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ { access_log off; log_not_found off; expires max; } location = /robots.txt { access_log off; log_not_found off; } location ~ /\. { deny all; access_log off; log_not_found off; } } And finally here's an error log: 2013/06/29 08:05:37 [error] 4056#0: *52 rewrite or internal redirection cycle while internally redirecting to "/index.php", client: 127.0.0.1, server: example.com, request: "GET /nginx HTTP/1.1", host: "localhost"

    Read the article

  • Tool or script to detect moved or renamed files on Linux prior to a backup

    - by Pharaun
    Basically I am searching to see if there exists a tool or script that can detect moved or renamed files so that I can get a list of renamed/moved files and apply the same operation on the other end of the network to conserve bandwidth. Basically disk storage is cheap but bandwidth isn't, and the problem is that the files often will be reorganized or moved around into a better directory structure; thus, when you use rsync to do the backup, rsync won't notice that it's a renamed or moved file and re-transmits it over the network all over again despite having the same file on the other end. So I am wondering if there exists a script or tool that can record where all the files are and their names, then just prior to a backup, it would rescan and detect moved or renamed files, then I can take that list and re-apply the move/rename operation on the other side. Here's a list of the "general" features of the files: large, unchanging files that can be renamed or moved around. [Edit:] These are all good answers, and what I ended up doing in the end was looking at all of the answers; I will be writing some code to deal with this. Basically what I am thinking/working on now is: Using something like AIDE for the "initial" scan, enabling me to keep checksums on the files because they are supposed to never change, so it would aid in detecting corruption. Creating an inotify daemon that would monitor these files/directories and record any changes relating to renames and moving the files around to a log file. There are some edge cases where inotify might fail to record that something happened to the file system, thus there is a final step of using find to search the file system for files that have a change time later than the last backup. This has several benefits: checksums/etc. from AIDE to be able to check/make sure that some media did not get corrupted; inotify keeps resource usage low and there is no need to re-scan the filesystem over and over; no need to patch rsync (if I have to patch things I can, but I would prefer to avoid patching things to keep the burden lower, i.e. no need to re-patch every time there is an update). I've used Unison before and it's really nice, however I could've sworn that Unison does keep copies around on the filesystem and that its "archive" files can grow to be rather large?
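
    Since some code is going to be written anyway, here is a minimal Python sketch of the snapshot/compare idea: fingerprint each file (size plus a hash of the first MiB, which is usually enough for large, unchanging files), store the index between runs, and print the rename/move operations to replay on the far side before running rsync. The paths and the state-file format are placeholders, not part of the question:

        import hashlib
        import json
        import os
        import sys

        def snapshot(root):
            """Map a cheap content fingerprint to the file's current relative path."""
            index = {}
            for dirpath, _dirs, files in os.walk(root):
                for name in files:
                    path = os.path.join(dirpath, name)
                    size = os.path.getsize(path)
                    with open(path, "rb") as f:
                        digest = hashlib.sha1(f.read(1 << 20)).hexdigest()
                    index[(size, digest)] = os.path.relpath(path, root)
            return index

        def moves(old_index, new_index):
            """Yield (old_path, new_path) for files that only changed name or location."""
            for key, old_path in old_index.items():
                new_path = new_index.get(key)
                if new_path and new_path != old_path:
                    yield old_path, new_path

        if __name__ == "__main__":
            # e.g. python detect_moves.py /data/share state.json  (run just before each backup)
            root, state_file = sys.argv[1], sys.argv[2]
            new = snapshot(root)
            if os.path.exists(state_file):
                with open(state_file) as f:
                    old = {tuple(json.loads(k)): v for k, v in json.load(f).items()}
                for src, dst in moves(old, new):
                    print("mv '%s' '%s'" % (src, dst))
            with open(state_file, "w") as f:
                json.dump({json.dumps(list(k)): v for k, v in new.items()}, f)

    The printed mv commands can be applied to the remote copy first, after which rsync only has to transfer genuinely new or changed data.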

    Read the article

  • How do I (robustly) remotely execute tasks on Windows workstations in a domain?

    - by Zac B
    I'm not even sure if "robustly" is a word. Anyway. Context: We have a few hundred Windows 7 workstations on a LAN. We use AD/GPO management pretty heavily, but there are a lot of periodic and/or manual maintenance tasks we need to do that can't be done via GPO/scheduled task. For example, say I want to execute program X (which runs silently, in the background, and doesn't bother the user) on workstation Y, or say I want to execute task A on a workstation group B either on a schedule or on demand. Kicking the users off of their computers to do this (i.e. using RDP) is a no-no, and doesn't work on groups anyway. Question: What's the best way to do this that is robust enough that, after setup, I could give it to beginner support people (read: people who are phobic of the command line, and get confused with GUI interfaces more complicated than Firefox)? I'm a competent programmer, and, if there is a robust set of tools or framework out there for this type of task, I'd consider hacking something together myself if it didn't take too long. If there's some combination of tools or techniques that others use to make remote-workstation-administration doable by beginners, I have yet to find it. For those who care about the "why": I'm midlevel IT, and was told to implement a remote management solution that allows arbitrary/scheduled remote execution, with confirmation that programs actually ran remotely, and the ability to view what they returned. "Why?" I asked, "Can't I just use PsExec and the task scheduler on a dispatcher machine?" "No," I was told, "'Joe' the second-week tech is going to be in charge of this one, and he needs something simple with a GUI." What I've tried: I've played with making a bunch of one-clickable "transfer files to remote computer and run them with PsExec" batch/VB scrips, but those tend to break down and don't easily support running on customizable groups. I've played a little bit with the Windows version of Puppet, but it doesn't support arbitrary-time remote execution (it's ability to group computers into a tree/node structure is really nice though). I've used an older version of Altiris, and, while it does a lot of what I want, it's interface is awful, it's slow, crashes a lot, and is probably too expensive for management. SwiftWater's DMS solution does some of what I want, but it's very underdeveloped, closed-source (not a deal breaker but not ideal), and I get the impression that support and reliability are lacking.
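
    Since PsExec and a dispatcher machine are already on the table, one low-tech way to make that beginner-friendly is to hide the loop and the return-code collection behind a single script, which a simple GUI front end or scheduled task can then call. A hedged Python sketch; the PsExec path, flags and file layout are assumptions about a typical setup, not a prescription:

        import csv
        import subprocess
        import sys

        PSEXEC = r"C:\tools\PsExec.exe"    # assumed install location on the dispatcher

        def run_on(host, command):
            """Run a command on one workstation via PsExec; return (return_code, output)."""
            cmd = [PSEXEC, r"\\" + host, "-s", "-n", "10"] + command
            result = subprocess.run(cmd, capture_output=True, text=True)
            return result.returncode, (result.stdout + result.stderr).strip()

        def run_on_group(hosts, command, report_path):
            """Run the command on every host and write a CSV report the helpdesk can open."""
            with open(report_path, "w", newline="") as f:
                writer = csv.writer(f)
                writer.writerow(["host", "return_code", "output"])
                for host in hosts:
                    rc, output = run_on(host, command)
                    writer.writerow([host, rc, output])
                    print("%s: rc=%d" % (host, rc))

        if __name__ == "__main__":
            # e.g. python dispatch.py workstations.txt report.csv program_x.exe /silent
            hosts_file, report = sys.argv[1], sys.argv[2]
            with open(hosts_file) as f:
                hosts = [line.strip() for line in f if line.strip()]
            run_on_group(hosts, command=sys.argv[3:], report_path=report)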

    Read the article

  • How to Remove Extensions From, and Force the Trailing Slash at the End of URLs?

    - by Kronbernkzion
    Example of current file structure: example.com/foo.php example.com/bar.html example.com/directory/ example.com/directory/foo.php example.com/directory/bar.html example.com/cgi-bin/directory/foo.cgi I want to remove HTML, PHP and CGI extensions from, and then force the trailing slash at the end of URLs. So, it could look like this: example.com/foo/ example.com/bar/ example.com/directory/ example.com/directory/foo/ example.com/directory/bar/ example.com/cgi-bin/directory/foo/ I've searched for solution for 17 hours straight and visited more than a few hundred pages on various blogs and forums. I'm not joking. So I think I've done my research. Here is the code that sits in my .htaccess file right now: RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME}\.html -f RewriteRule ^(([^/]+/)*[^./]+)/$ $1.html RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_URI} !(\.[a-zA-Z0-9]|/)$ RewriteRule (.*)$ /$1/ [R=301,L] As you can see, this code only removes .html (and I'm not very happy with it because I think it could be done a lot simpler). I can remove the extension from PHP files when I rename them to .html through .htaccess, but that's not what I want. I want to remove it straight. This is the first thing I don't know how to do. The second thing is actually very annoying. My .htaccess file with code above, adds .html/ to every string entered after example.com/directory/foo/. So if I enter example.com/directory/foo/bar (obviously /bar doesn't exist since foo is a file), instead of just displaying message that page is not found, it converts it to example.com/directory/foo/bar.html/, then searches for a file for a few seconds and then displays the not found message. This, of course, is bad behavior. So, once again, I need the code in .htaccess to do the following things: Remove .html extension Remove .php extension Remove .cgi extension Force the trailing slash at the end of URLs Requests should behave correctly (no adding trailing slashes or extensions to strings if file or directory doesn't exist on server) Code should be as simple as possible I would very much appreciate any help. And to first person that gives me the solution, I'll send two $50 iTunes Store gift cards for US store. If this offend anyone, I am truly sorry and I apologize. Thanks in advance. And sorry for such a long post.
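
    Independent of the mod_rewrite syntax, the mapping being asked for can be stated in a few lines. This hypothetical Python helper is only here to pin down the intended behaviour (strip the three extensions, then force exactly one trailing slash), not to replace the .htaccess rules:

        import re

        def canonical(url):
            """Strip .html/.php/.cgi and force a single trailing slash."""
            url = re.sub(r"\.(html|php|cgi)$", "", url)
            return url if url.endswith("/") else url + "/"

        for original in ("/foo.php", "/bar.html", "/directory/", "/cgi-bin/directory/foo.cgi"):
            print(original, "->", canonical(original))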

    Read the article

  • Windows Server 2008 R2 loses ability to connect to network share

    - by JamesB
    I could sure use some help with this one: I've got two Windows Server 2008 R2 x64 Terminal Servers, as well as several 2003 servers (DNS / Wins / AD / DC). On the two 2008 boxes, every now and then they will get in this mode where you can't map a drive to a random server. I say random server because it's not always the same server that you can't map to. Here is a summary of what I can and can't do: net view \\servername Sometimes this works, sometimes it does not. net view \\FQDN This always works. net view \\IPAddress This always works. ping servername Sometimes this works, sometimes it does not. ping FQDN This always works. ping IPAddress This always works. I've been looking all over for a solution to this. It sure seems like Microsoft would have a hotfix by now. The kicker to this is that it sometimes works great, especially after a reboot. It may run for 2 weeks just fine, but all of a sudden it will fail to resolve the remote server name. It will then be this way for a few days, then it might start working again. Also, while it's in the mode of not working, the other servers have no problem getting there. It's just these 2008 R2 Terminal Servers. Setting a static entry in the Hosts file and LMHosts does not make it work. All servers have static IPs and they are registered in DNS and Wins just fine. Here is a long thread on MS Technet of the exact same problem, but they don't have a good solution. Here is their workaround (It was from June of 2010): Good news - a hotfix is in the works and a workaround has been identified: Root cause is that since this is SMB1 all user sessions are on a single TCP connection to the remote server. The first user to initiate a connection to the remote SMB server has their logon-ID added to the structure defining the connection. If that user logs off all subsequent uses of that TCP session fail as the logon-id is no longer valid. As a workaround for now to keep the issue from happening you will want to have the user not logoff the Terminal Server only disconnect their sessions. Any word from anyone out there about a solution? Any help would sure be appreciated. Thanks, James
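
    One way to catch the failure window (and correlate it with user logoffs, per the workaround quoted above) is to poll the short name, the FQDN and the raw IP from one of the affected 2008 R2 boxes and log when only the short name stops resolving or connecting. A small Python sketch; the target names and the five-minute interval are placeholders:

        import datetime
        import socket
        import time

        TARGETS = ["servername", "servername.example.com", "192.0.2.10"]   # placeholders
        PORT = 445   # SMB

        def check(target):
            try:
                addr = socket.gethostbyname(target)
                with socket.create_connection((addr, PORT), timeout=5):
                    return "ok (%s)" % addr
            except OSError as exc:
                return "FAIL: %s" % exc

        while True:
            stamp = datetime.datetime.now().isoformat(timespec="seconds")
            print(stamp, {target: check(target) for target in TARGETS})
            time.sleep(300)   # every five minutes; compare against logoff times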

    Read the article

  • Subdomains, folders, internationalization, and hosting solutions

    - by justinbach
    I'm a web developer and I recently landed a gig to develop the US / international version of a site for a company that's big in Europe but hasn't done much expansion into the US yet. They've got an existing site at company.com, which should remain visible to European customers after the new site goes up, and an existing (not great) site at company.us, which I'm going to be redeveloping (the .us site will be taken down when my version goes up--keep reading for details). My solution needs to take into account the fact that there are going to be new, localized versions of the site in the fairly near future, so the framework I'm writing needs to be able to handle localizations fairly easily (dynamically load language packs, etc). The tricky thing is the European branch of the company manages the .com site hosting (IIS-based) and the DNS, while I'll be managing the US hosting (and future localizations), which will likely be apache-based. I've never been a big fan of the ".us" TLD--I think most US users are accustomed to visiting the .com--so the thought is that the European branch will detect the IP of inbound traffic and redirect all US-based addresses to us.example.com (or whatever the appropriate localized subdomain might be), which would point to the IP address of my host. I'd then serve the appropriate locale-specific content by pulling the subdomain from the $_SERVER superglobal (assuming PHP). I couldn't find any examples of international organizations that take a subdomain-based approach for localization, but I'm not sure I have any other options as a result of the unique hosting structure here (in that there's not a unified hosting solution for the European and US sites). In my experience, the US version of an international site would live at domain.com/us, not at us.domain.com, and I'd imagine that this has to do with SEO (subdomains are treated as separate sites, so improved rankings for the US site wouldn't help the Canadian version if subdomains are used to differentiate between them). My question is: is there a better approach to solving this problem than the one I'm taking? Ideally, I'd like to use a folder-based approach (see adidas.com as an example of what I'm talking about), but I'm not sure that's a possibility given that the US site (and other localizations) will not be hosted on the same server as the rest of the .com. Can you, in IIS, map a folder (e.g. domain.com/us) to a different IP address? What would you recommend? Thanks for your consideration.

    Read the article

  • Routing between 2 different subnets on 2 different interfaces in SonicOS

    - by Chris1499
    I'm having a bit of a problem allowing traffic between two of my subnets. Here's the structure I've built. The X0 interface has our Windows server on it and it handles DHCP/DNS, etc. X1 has the WAN connection. The SonicWall is handling DHCP on X2. The X3 interface is connected to a different VLAN on the 48-port switch. The SonicWall is handling DHCP on this network as well. So here's what I want to do. The network on X2 is for our guest wireless; I don't want it to be able to access any of the other networks, just the internet, so that is all blocked in the firewall. No issues there. The X3 network is going to be for programmable controllers, and needs to be able to access the X0 network where our computers are. This is where my problem is. I'm not able to get between the 192.168.2.xxx and the 192.168.1.xxx on interfaces X0 and X3 respectively. I have these rules set up in the firewall. The LAN Primary Subnet is the 192.168.2.0 on X0. So if I'm not mistaken, this will allow traffic between the two through the firewall. Now this is where I'm a little confused. Do I need to use NAT to get the traffic from X0 to go to X3 (and vice versa), or a static route, or both? Currently I have both, though I doubt they're done correctly (also in screenshot). I've tried to ping between the two without luck. Any advice, or if you see what's wrong with my setup, is much appreciated. If you need some more information, let me know. Thanks all! EDIT: So I found that I need neither NAT nor a static route; the setting in the firewall is enough. I can now ping from the 192.168.1.xxx network, however I can't access the server on the 192.168.2.xxx network. When I try to access it I get "An error occurred while reconnecting to Z: to server Microsoft Windows Network: The local device name is already in use. This connection has not been restored." What am I missing?

    Read the article

  • Nginx not working properly on subdomains [SOLVED]

    - by javipas
    I've been trying to set up a Sugar CRM instance. I've got a domain that has its main site on a server (www.domain.com) and I've created a subdomain (sugar.domain.com), but I want this subdomain to be hosted on another server. This second server has nginx installed, and there's a working WordPress blog there on a virtualhost, so I would need to set up a second site. To do this I've created the directory structure, and I've created a /etc/nginx/sites-enabled/sugar.domain.com configuration file that has the following: server { listen 80; server_name sugar.domain.com *.domain.com; access_log /var/www/sugar/log/access.log; error_log /var/www/sugar/log/error.log info; location / { root /var/www/sugar; index index.php; } location ~ .php$ { fastcgi_split_path_info ^(.+\.php)(.*)$; fastcgi_pass backend; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /var/www/sugar/$fastcgi_script_name; include fastcgi_params; fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_intercept_errors on; fastcgi_ignore_client_abort on; fastcgi_read_timeout 180; } ## Disable viewing .htaccess & .htpassword location ~ /\.ht { deny all; } } upstream backend { server 127.0.0.1:9000; } As far as I know, I need the *.domain.com parameter on the "server_name" flag, but something is crashing here: I get either a 403 Forbidden error, or I get PHP code (I can read the PHP file code in the browser, like normal text) that somehow is not executed. I've tried setting permissions to 755 inside the /var/www/sugar/ directory, and I've also set up the owner:group with a chown -R www-data:www-data /var/www/sugar/ The thing is, I don't know if my mistake is in the nginx site configuration, in my folder permissions, or in some other place :( Could it be because the main domain (www.domain.com) is hosted on another server? Do they have to be together necessarily?

    Read the article

  • Why would a process monitoring script use exit 1; on finding no problems?

    - by user568458
    General question: On a Linux (CentOS) server, if a process monitoring script run by cron is set to close with exit 1; rather than exit 0; on finding that everything is okay and that no action is needed, is that a mistake? Or are there legitimate reasons for calling exit 1; instead of exit 0; on the "Everything's fine, no action needed" condition? exit 0; on finding no problems seems to me to be more appropriate. But maybe there's something I'm not aware of. For example, maybe there's something specific to Cron? Or maybe there's a convention in process monitoring scripts that 'failure' means 'this script did not need to fix a problem' (rather than what I would expect, which is that exit 1; would mean 'the process being monitored has failed'?) My specific case: I'm looking at a process monitoring script written by my web hosting company. By process monitoring script, I mean a script executed by Cron on a regular basis that checks if an important system process is running, and if it isn't running, takes actions such as mailing an administrator or restarting the process. Here's the (generalised) structure of their script, for a service running on port 8080 (in this case, Apache Tomcat): SERVICE=$(/usr/sbin/lsof -i tcp:8080 | wc -l); if [ $SERVICE != 0 ]; then exit 1; else #take action fi Seems simple enough even for someone with limited knowledge like me, except the exit 1; part seems odd. As I understand it, exit 0; closes a program and signifies to the parent that executed the program that everything is fine; exit n; where n > 0 and n < 127 signifies that there has been some kind of error or problem. Here, their script seems to go against that rule - it calls exit 1; in the condition where everything is fine, and doesn't exit after taking remedial action in the problem condition. To me, this looks like a mistake - but my experience in this area is limited. Are there cases where calling exit 1; in the "Everything's fine, no action needed" condition is more appropriate than calling exit 0;? Or is it a mistake? Wider context is pretty simple. It's a CentOS VPS, running Plesk. The script is being called by Cron via Plesk's "Scheduled tasks" Cron manager. There's no custom layer between Cron and this script that would respond in an unusual way to the exit call. It's a fairly average, almost out-of-the-box Plesk-managed CentOS VPS (in so far as there is such a thing). The process being monitored by this script is Apache Tomcat.
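
    For comparison, here is the same check with the conventional exit statuses, written as a hedged Python sketch: exit 0 on the healthy path, a non-zero status only when something was actually wrong. The lsof invocation mirrors the script above; the restart/mail step is left as a comment.

        import subprocess
        import sys

        PORT = 8080

        def listener_lines(port):
            """Count lines of lsof output for TCP activity on the port (0 means nothing found)."""
            result = subprocess.run(["/usr/sbin/lsof", "-i", "tcp:%d" % port],
                                    capture_output=True, text=True)
            return len(result.stdout.splitlines())

        if __name__ == "__main__":
            if listener_lines(PORT) > 0:
                sys.exit(0)    # everything fine, no action needed
            # take action here: restart Tomcat, mail an administrator, ...
            sys.exit(1)        # tell cron/wrappers that something was wrong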

    Read the article

  • USB hard drive not recognized

    - by user318772
    Until recently I was using the portable USB hard drive in my win 7 laptop and ubuntu laptop. Suddenly now none of the laptops recognize it. This is the message i get by doing lsusb... Bus 001 Device 004: ID 1058:1010 Western Digital Technologies, Inc. Elements External HDD Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 003 Device 003: ID 0b97:7762 O2 Micro, Inc. Oz776 SmartCard Reader Bus 003 Device 002: ID 0b97:7761 O2 Micro, Inc. Oz776 1.1 Hub Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 002 Device 002: ID 413c:a005 Dell Computer Corp. Internal 2.0 Hub Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub fdisk doesn't show the external hard drive Disk /dev/sda: 80.0 GB, 80026361856 bytes 255 heads, 63 sectors/track, 9729 cylinders, total 156301488 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0004a743 Device Boot Start End Blocks Id System /dev/sda1 * 2048 152111103 76054528 83 Linux /dev/sda2 152113150 156301311 2094081 5 Extended /dev/sda5 152113152 156301311 2094080 82 Linux swap / Solaris when i do testdisk TestDisk 6.14, Data Recovery Utility, July 2013 Christophe GRENIER <[email protected]> http://www.cgsecurity.org TestDisk is free software, and comes with ABSOLUTELY NO WARRANTY. Select a media (use Arrow keys, then press Enter): >Disk /dev/sda - 80 GB / 74 GiB - ST980825AS Disk /dev/sdb - 2199 GB / 2048 GiB testdisk-> Intel->analyse I get partition error Disk /dev/sdb - 2199 GB / 2048 GiB - CHS 2097152 64 32 Current partition structure: Partition Start End Size in sectors Partition: Read error Here is the output of dmesg [11948.549171] Add. Sense: Invalid command operation code [11948.549177] sd 2:0:0:0: [sdb] CDB: [11948.549181] Read(16): 88 00 00 00 00 00 00 00 00 00 00 00 00 08 00 00 [11948.550489] sd 2:0:0:0: [sdb] Invalid command failure [11948.550495] sd 2:0:0:0: [sdb] [11948.550499] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE [11948.550505] sd 2:0:0:0: [sdb] [11948.550508] Sense Key : Illegal Request [current] [11948.550514] Info fld=0x0 [11948.550519] sd 2:0:0:0: [sdb] [11948.550525] Add. Sense: Invalid command operation code [11948.550531] sd 2:0:0:0: [sdb] CDB: [11948.550534] Read(16): 88 00 00 00 00 00 00 00 00 00 00 00 00 08 00 00 [11948.551870] sd 2:0:0:0: [sdb] Invalid command failure [11948.551876] sd 2:0:0:0: [sdb] [11948.551880] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE [11948.551885] sd 2:0:0:0: [sdb] [11948.551888] Sense Key : Illegal Request [current] [11948.551895] Info fld=0x0 [11948.551900] sd 2:0:0:0: [sdb] [11948.551905] Add. Sense: Invalid command operation code [11948.551911] sd 2:0:0:0: [sdb] CDB: [11948.551914] Read(16): 88 00 00 00 00 00 00 00 00 00 00 00 00 08 00 00 If possible i want to retrive at least some data from this hard drive. If thats not possible I would like to format it and use it. Any help will be greatly appreciated Thanks

    Read the article

  • Is there a way to replicate a very large file shares in real-time?

    - by fsckin
    I have an hourly cron job that copies about 40GB of data from a source folder into a new folder with the hour appended on the end. When it's done, the job prunes anything older than 24 hours. This data changes very often during work hours and is on a Samba file share. Here's how the folder structure looks: \server\Version.1 \server\Version.2 \server\Version.3 ... \server\Version.24 The contents of each new folder compared to the last one usually don't change very much, since this is an hourly job. Now you might be thinking that I'm an idiot for dreaming this up. Truth is, I just found out. It's actually been used for years and is so incredibly simple, anyone could delete the ENTIRE 40GB share (imagine that dialog spooling up... deleting thousands and thousands of files) and it would actually be faster to restore by moving the latest copy back to the source than it took to delete. Brilliant! Now to top this off, I need to efficiently replicate this 960GB of "mostly similar" data to a remote server over a WAN link, with the replication happening as close to real-time as possible -- think hot spare, disaster recovery, etc. My first thought was rsync. Total failure. Rsync sees a deletion of the folder that is 24 hours old and the addition of a new folder with 30GB of data to sync! I also looked at rdiff-backup and unison; they both appear to use similar algorithms and do not keep enough meta-data to do this intelligently. Best thing that I can find "out of the box" to do this is Windows Server "Distributed Filesystem Replication", which uses "Remote Differential Compression" -- after reading the background information on how this works, it actually looks like exactly what I need. Problem: Both servers are running Linux. D'oh! One approach to this I'm looking at is this, say it's 5AM and the cron job finishes: the new Version.5 folder arrives on the local server; SSH to the remote server and copy Version.4 to Version.5; run rsync on the local server pushing changes to the remote server; rsync finally knows to do a differential copy between Version.4 and Version.5 (a sketch of this is below). Is there a smarter way to replicate Samba shares as close to real-time as possible? Anything out there that does "Remote Differential Compression" on Linux?
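
    The three-step plan at the end can be driven by a small script that runs right after the hourly cron job finishes. A Python sketch under stated assumptions: the remote host and paths are placeholders, and the seed copy uses cp -al (hard links) as an optimisation over a plain copy so the remote disk does not pay for a full duplicate; a plain cp -a works the same way as far as the rsync step is concerned.

        import subprocess
        import sys

        REMOTE = "backup@remote.example.com"   # placeholder
        REMOTE_BASE = "/srv/mirror"            # placeholder
        LOCAL_BASE = "/server"                 # where Version.N lands locally

        def run(cmd):
            print(">", " ".join(cmd))
            subprocess.run(cmd, check=True)

        def replicate(new_hour, prev_hour):
            new_dir, prev_dir = "Version.%d" % new_hour, "Version.%d" % prev_hour
            # Step 2 of the plan: seed the new folder on the remote side from the
            # previous hour, so rsync only has to move the actual differences.
            run(["ssh", REMOTE,
                 "cp -al %s/%s %s/%s" % (REMOTE_BASE, prev_dir, REMOTE_BASE, new_dir)])
            # Step 3: differential push of the freshly written local folder.
            run(["rsync", "-a", "--delete",
                 "%s/%s/" % (LOCAL_BASE, new_dir),
                 "%s:%s/%s/" % (REMOTE, REMOTE_BASE, new_dir)])

        if __name__ == "__main__":
            # e.g. python replicate.py 5 4
            replicate(int(sys.argv[1]), int(sys.argv[2]))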

    Read the article

  • Proper set up shared folders for users

    - by user221486
    First I would like to say thanks for helping, and I have huge problem with proper set up permission for shared folders. I have Windows 7 x64 ent. - name: backupfb - added to domain with shared folder on drive e: (e:\backup) 50 clients/laptops with TSM Tivoli fastback for workstations who save files on shared folder And I need to configure proper permission for my shared folders that only owner of folder can access to their folders. Folder structure is: e:\backup <- shared as a "backup" folder \\backupfb\backup\ e:\backup\BackupAdmin <-- directory is used by the Tivoli Storage Manager FastBack for Workstations client to download revisions and configurations. Nodes require read-only access to these directories e:\backup\RealTimeBackup <-- enable client accounts to create directories that are only accessible by the account that created them. As a result, the directory that contains data for a node is not created until that node connects to the server. So permission should look like that (take from instructions): Inheritable permissions from object`s parents are DISABLE Permission entries: \\backupfb\backup\BackupAdmin Allow Users Read, Execute This folder, subfolders, and files Traverse Folder / Execute Allow List Folder / Read Data Allow Read Attributes Allow Read Extended Attributes Allow Delete subfolders and files Allow Delete Allow Read Permission’s Allow Allow Administrators Full Control This folder, subfolders, and files Both folders have enabled option "apply these permissions to objects and/or containers within this container only" Here everything works fine \\backupfb\backup\RealTimeBackup <<-- Allow Administrators Full Control This folder, subfolders, and files Allow CREATOR OWNER Full Control This folder, subfolders, and files (from domain) Allow Users Special This folder only Traverse Folder / Execute Allow List Folder / Read Data Allow Read Attributes Allow Read Extended Attributes Allow Create Files / Write Data Allow Create Folders / Append Data Allow Delete subfolders and files Allow Read Permission’s Allow Allow OWNER RIGHTS* Full Control This folder, subfolders, and files Here I have huge problem with CREATOR OWNER Im able to set FULL CONTROL but I can only apply "Subfolders and files only". When I change props. to "This folder, subfolders and files" and save its change to "Subfolders and files only" So I try use icacls to set up permissions @echo off takeown /F E:\backup\ /R /A for /D %%i IN (E:\backup\RealTimeBackup*) DO icacls E:\backup\RealTimeBackup\%%~nxi /grant:r cloud\%%~nxi:F /T /C pause but after that user are able to create just one folder in \backupfb\backup\RealTimeBackup\userfolder but problem is with subfolders In log i have: FBW5022E Unable to access the specified file Explanation: The file specified is unable to be accessed. Possibly spelled incorrectly, or bad path, or permissions. User response: Ensure the user has the proper permissions for the file and directories involved andthat the file and directory exist Any idea ?? pls help ;-) thanks

    Read the article

  • Complete Guide to Networking Windows 7 with XP and Vista

    - by Mysticgeek
    Since there are three versions of Windows out in the field these days, chances are you need to share data between them. Today we show how to get each version to be share files and printers with one another. In a perfect world, getting your computers with different Microsoft operating systems to network would be as easy as clicking a button. With the Windows 7 Homegroup feature, it’s almost that easy. However, getting all three of them to communicate with each other can be a bit of a challenge. Today we’ve put together a guide that will help you share files and printers in whatever scenario of the three versions you might encounter on your home network. Sharing Between Windows 7 and XP The most common scenario you’re probably going to run into is sharing between Windows 7 and XP.  Essentially you’ll want to make sure both machines are part of the same workgroup, set up the correct sharing settings, and making sure network discovery is enabled on Windows 7. The biggest problem you may run into is finding the correct printer drivers for both versions of Windows. Share Files and Printers Between Windows 7 & XP  Map a Network Drive Another method of sharing data between XP and Windows 7 is mapping a network drive. If you don’t need to share a printer and only want to share a drive, then you can just map an XP drive to Windows 7. Although it might sound complicated, the process is not bad. The trickiest part is making sure you add the appropriate local user. This will allow you to share the contents of an XP drive to your Windows 7 computer. Map a Network Drive from XP to Windows 7 Sharing between Vista and Windows 7 Another scenario you might run into is having to share files and printers between a Vista and Windows 7 machine. The process is a bit easier than sharing between XP and Windows 7, but takes a bit of work. The Homegroup feature isn’t compatible with Vista, so we need to go through a few different steps. Depending on what your printer is, sharing it should be easier as Vista and Windows 7 do a much better job of automatically locating the drivers. How to Share Files and Printers Between Windows 7 and Vista Sharing between Vista and XP When Windows Vista came out, hardware requirements were intensive, drivers weren’t ready, and sharing between them was complicated due to the new Vista structure. The sharing process is pretty straight-forward if you’re not using password protection…as you just need to drop what you want to share into the Vista Public folder. On the other hand, sharing with password protection becomes a bit more difficult. Basically you need to add a user and set up sharing on the XP machine. But once again, we have a complete tutorial for that situation. Share Files and Folders Between Vista and XP Machines Sharing Between Windows 7 with Homegroup If you have one or more Windows 7 machine, sharing files and devices becomes extremely easy with the Homegroup feature. It’s as simple as creating a Homegroup on on machine then joining the other to it. It allows you to stream media, control what data is shared, and can also be password protected. If you don’t want to make your Windows 7 machines part of the same Homegroup, you can still share files through the Public Folder, and setup a printer to be shared as well.   
Use the Homegroup Feature in Windows 7 to Share Printers and Files Create a Homegroup & Join a New Computer To It Change which Files are Shared in a Homegroup Windows Home Server If you want an ultimate setup that creates a centralized location to share files between all systems on your home network, regardless of the operating system, then set up a Windows Home Server. It allows you to centralize your important documents and digital media files on one box and provides easy access to data and the ability to stream media to other machines on your network. Not only that, but it provides easy backup of all your machines to the server, in case disaster strikes. How to Install and Setup Windows Home Server How to Manage Shared Folders on Windows Home Server Conclusion The biggest annoyance is dealing with printers that have a different set of drivers for each OS. There is no real easy way to solve this problem. Our best advice is to try to connect it to one machine, and if the drivers won’t work, hook it up to the other computer and see if that works. Each printer manufacturer is different, and Windows doesn’t always automatically install the correct drivers for the device. We hope this guide helps you share your data between whichever Microsoft OS scenario you might run into! Here are some other articles that will help you accomplish your home networking needs: Share a Printer on a Home Network from Vista or XP to Windows 7 How to Share a Folder the XP Way in Windows Vista

    Read the article

  • MVC Portable Areas Enhancement – Embedded Resource Controller

    - by Steve Michelotti
    MvcContrib contains a feature called Portable Areas which I’ve recently blogged about. In short, portable areas provide a way to distribute MVC binary components as simple .NET assemblies where the aspx/ascx files are actually compiled into the assembly as embedded resources. This is an extremely cool feature, but once you start building robust portable areas, you’ll also want to be able to access other external files like css and javascript.  After my recent post suggesting portable areas be expanded to include other embedded resources, Eric Hexter asked me if I’d like to contribute the code to MvcContrib (which of course I did!). Embedded resources are stored in a case-sensitive way in .NET assemblies and the existing embedded view engine inside MvcContrib already took this into account. Obviously, we’d want the same case-sensitivity handling for any embedded resource, so my job consisted of 1) adding the Embedded Resource Controller, and 2) a small refactor to extract the logic that deals with embedded resources so that the embedded view engine and the embedded resource controller could both leverage it and, therefore, keep the code DRY. The embedded resource controller targets these scenarios:
    - External image files referenced in an <img> tag
    - External files such as css or JavaScript files
    - Image files referenced inside css files
    Embedded Resources Walkthrough This post will describe a walkthrough of using the embedded resource controller in your portable areas to cover the scenarios outlined above. I will build a trivial “Quick Links” widget to illustrate the concepts. The portable area registration is the starting point for all portable areas. The MvcContrib.PortableAreas.EmbeddedResourceController is optional functionality – you must opt in if you want to use it.  To do this, you simply “register” it by providing a route in your area registration that uses it like this:
    1: context.MapRoute("ResourceRoute", "quicklinks/resource/{resourceName}",
    2: new { controller = "EmbeddedResource", action = "Index" },
    3: new string[] { "MvcContrib.PortableAreas" });
    First, notice that I can specify any route I want (e.g., “quicklinks/resource/…”).  Second, notice that I need to include the “MvcContrib.PortableAreas” namespace as the fourth parameter so that the framework is able to find the EmbeddedResourceController at runtime. The handling of embedded views and embedded resources has now been merged.  Therefore, the call to:
    1: RegisterTheViewsInTheEmmeddedViewEngine(GetType());
    has now been removed (breaking change).  It has been replaced with:
    1: RegisterAreaEmbeddedResources();
    Other than that, the portable area registration remains unchanged. The solution structure for the static files in my portable area looks like this: I’ve got a css file in a folder called “Content” as well as a couple of image files in a folder called “images”. To reference these in my aspx/ascx code, all I have to do is this:
    1: <link href="<%= Url.Resource("Content.QuickLinks.css") %>" rel="stylesheet" type="text/css" />
    2: <img src="<%= Url.Resource("images.globe.png") %>" />
    This results in the following HTML markup:
    1: <link href="/quicklinks/resource/Content.QuickLinks.css" rel="stylesheet" type="text/css" />
    2: <img src="/quicklinks/resource/images.globe.png" />
    The Url.Resource() method is now included in MvcContrib as well. Make sure you import the “MvcContrib” namespace in your views.
Next, I have the following HTML to render the quick links:
    1: <ul class="links">
    2: <li><a href="http://www.google.com">Google</a></li>
    3: <li><a href="http://www.bing.com">Bing</a></li>
    4: <li><a href="http://www.yahoo.com">Yahoo</a></li>
    5: </ul>
    Notice the <ul> tag has a class called “links”. This is defined inside my QuickLinks.css file and looks like this:
    1: ul.links li
    2: {
    3: background: url(/quicklinks/resource/images.navigation.png) left 4px no-repeat;
    4: padding-left: 20px;
    5: margin-bottom: 4px;
    6: }
    On line 3 we’re able to refer to the URL for the background property. As a final note, although we already have complete control over the location of the embedded resources inside the assembly, what if we also want control over the physical URL routes? This point was raised by John Nelson in this post, and it has been taken into account as well. For example, suppose you want your physical URL to look like this:
    1: <img src="/quicklinks/images/globe.png" />
    instead of the corresponding URL shown above (i.e., “/quicklinks/resource/images.globe.png”). You can do this easily by specifying another route which includes a “resourcePath” parameter that is prepended. Here is the complete code for the area registration with the custom route for the images shown on lines 9-11:
    1: public class QuickLinksRegistration : PortableAreaRegistration
    2: {
    3: public override void RegisterArea(System.Web.Mvc.AreaRegistrationContext context, IApplicationBus bus)
    4: {
    5: context.MapRoute("ResourceRoute", "quicklinks/resource/{resourceName}",
    6: new { controller = "EmbeddedResource", action = "Index" },
    7: new string[] { "MvcContrib.PortableAreas" });
    8:
    9: context.MapRoute("ResourceImageRoute", "quicklinks/images/{resourceName}",
    10: new { controller = "EmbeddedResource", action = "Index", resourcePath = "images" },
    11: new string[] { "MvcContrib.PortableAreas" });
    12:
    13: context.MapRoute("quicklink", "quicklinks/{controller}/{action}",
    14: new {controller = "links", action = "index"});
    15:
    16: this.RegisterAreaEmbeddedResources();
    17: }
    18:
    19: public override string AreaName
    20: {
    21: get
    22: {
    23: return "QuickLinks";
    24: }
    25: }
    26: }
    The Quick Links portable area results in the following requests (including custom route formats). The complete code for this post is now included in the Portable Areas sample solution in the latest MvcContrib source code. You can get the latest code now. Portable Areas open up exciting new possibilities for MVC development!

    Read the article

  • JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g

    - by John-Brown.Evans
    This example shows the steps to create a simple JMS queue in WebLogic Server 11g for testing purposes. For example, to use with the two sample programs QueueSend.java and QueueReceive.java which will be shown in later examples. Additional, detailed information on JMS can be found in the following Oracle documentation: Oracle® Fusion Middleware Configuring and Managing JMS for Oracle WebLogic Server 11g Release 1 (10.3.6) Part Number E13738-06 http://docs.oracle.com/cd/E23943_01/web.1111/e13738/toc.htm 1. Introduction and Definitions A JMS queue in Weblogic Server is associated with a number of additional resources: JMS Server A JMS server acts as a management container for resources within JMS modules. Some of its responsibilities include the maintenance of persistence and state of messages and subscribers. A JMS server is required in order to create a JMS module. JMS Module A JMS module is a definition which contains JMS resources such as queues and topics. A JMS module is required in order to create a JMS queue. Subdeployment JMS modules are targeted to one or more WLS instances or a cluster. Resources within a JMS module, such as queues and topics, are also targeted to a JMS server or WLS server instances. A subdeployment is a grouping of targets. It is also known as advanced targeting. Connection Factory A connection factory is a resource that enables JMS clients to create connections to JMS destinations. JMS Queue A JMS queue (as opposed to a JMS topic) is a point-to-point destination type. A message is written to a specific queue or received from a specific queue.
The objects used in this example are:
Object Name            Type                 JNDI Name
TestJMSServer          JMS Server
TestJMSModule          JMS Module
TestSubDeployment      Subdeployment
TestConnectionFactory  Connection Factory   jms/TestConnectionFactory
TestJMSQueue           JMS Queue            jms/TestJMSQueue
2. Configuration Steps
The following steps are done in the WebLogic Server Console, beginning with the left-hand navigation menu.
2.1 Create a JMS Server
Services > Messaging > JMS Servers
Select New
Name: TestJMSServer
Persistent Store: (none)
Target: soa_server1 (or choose an available server)
Finish
The JMS server should now be visible in the list with Health OK.
2.2 Create a JMS Module
Services > Messaging > JMS Modules
Select New
Name: TestJMSModule
Leave the other options empty
Targets: soa_server1 (or choose the same one as the JMS server)
Press Next
Leave "Would you like to add resources to this JMS system module" unchecked and press Finish.
2.3 Create a SubDeployment
A subdeployment is not necessary for the JMS queue to work, but it allows you to easily target subcomponents of the JMS module to a single target or group of targets. We will use the subdeployment in this example to target the following connection factory and JMS queue to the JMS server we created earlier.
Services > Messaging > JMS Modules
Select TestJMSModule
Select the Subdeployments tab and New
Subdeployment Name: TestSubdeployment
Press Next
Here you can select the target(s) for the subdeployment. You can choose either Servers (i.e. WebLogic managed servers, such as the soa_server1) or JMS Servers such as the JMS Server created earlier. As the purpose of our subdeployment in this example is to target a specific JMS server, we will choose the JMS Server option.
Select the TestJMSServer created earlier
Press Finish
2.4 Create a Connection Factory
Services > Messaging > JMS Modules
Select TestJMSModule and press New
Select Connection Factory and Next
Name: TestConnectionFactory
JNDI Name: jms/TestConnectionFactory
Leave the other values at default
On the Targets page, select the Advanced Targeting button and select TestSubdeployment
Press Finish
The connection factory should be listed on the following page with TestSubdeployment and TestJMSServer as the target.
2.5 Create a JMS Queue
Services > Messaging > JMS Modules
Select TestJMSModule and press New
Select Queue and Next
Name: TestJMSQueue
JNDI Name: jms/TestJMSQueue
Template: None
Press Next
Subdeployments: TestSubdeployment
Finish
The TestJMSQueue should be listed on the following page with TestSubdeployment and TestJMSServer.
Confirm the resources for the TestJMSModule: using the Domain Structure tree, navigate to soa_domain > Services > Messaging > JMS Modules, then select TestJMSModule. You should see the resources created above (TestConnectionFactory and TestJMSQueue).
The JMS queue is now complete and can be accessed using the JNDI names jms/TestConnectionFactory and jms/TestJMSQueue. In the following blog post in this series, I will show you how to write a message to this queue, using the WebLogic sample Java program QueueSend.java.
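Until that post, here is a minimal sketch (not the actual QueueSend.java sample) of a standalone client that looks up these JNDI names and sends a message. It assumes the WebLogic client library (e.g. wlfullclient.jar) is on the classpath, that soa_server1 listens on t3://localhost:8001, and that no credentials are required for the JNDI context; adjust these for your environment.
import java.util.Hashtable;
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSender;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.Context;
import javax.naming.InitialContext;

public class SimpleQueueSend {
    public static void main(String[] args) throws Exception {
        // JNDI settings for the WebLogic server; the URL is an assumption (typical SOA managed server port).
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL, "t3://localhost:8001");
        InitialContext ctx = new InitialContext(env);

        // Look up the connection factory and queue created in steps 2.4 and 2.5.
        QueueConnectionFactory cf = (QueueConnectionFactory) ctx.lookup("jms/TestConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/TestJMSQueue");

        QueueConnection connection = cf.createQueueConnection();
        try {
            QueueSession session = connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            QueueSender sender = session.createSender(queue);
            TextMessage message = session.createTextMessage("Hello from SimpleQueueSend");
            sender.send(message);
            System.out.println("Sent: " + message.getText());
        } finally {
            connection.close();
        }
        ctx.close();
    }
}
Running it should leave one message on TestJMSQueue, which you can confirm on the queue's Monitoring tab in the console.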

    Read the article

  • MySQL 5.5 brings in new ways to authenticate users

    - by Georgi Kodinov
    Ever wanted to use your server's OS for authenticating MySQL users? Or the corporate LDAP repository? Unfortunately, options like these are plentiful nowadays, and providing hard-coded support for protocol X or service Y is not the best possible idea. MySQL 5.5 has taken a step in the right direction by providing an infrastructure that lets you make the server understand different authentication protocols by creating a set of simple plugins (one for the client and one for the server). So now you can easily extend MySQL to search for and authenticate users in your favorite user directory. In fact, the API supplied is so versatile that we took the opportunity to re-design the current "native" authentication mechanism as a built-in, always-on plugin! OK, let me give you an example: imagine we have a bunch of users defined in your OS, e.g. a user joro with his respective password. And we have a MySQL instance running on the same computer. It would not be unexpected to need to let joro access and/or modify MySQL data. The first step is to define him as a MySQL user, and there's a problem right there: MySQL's CREATE USER joro@localhost IDENTIFIED BY 'joros_password' statement needs a password. And this password is in no way related to the password that joro has set up in the OS. What's worse: if joro changes his OS password, the change will in no way be reflected in MySQL, so he'll need to change his MySQL password in a separate step. Not very convenient, especially when you have a lot of users. This is a laborious setup for joro's DBA as well: he'll have to disable joro's access in both MySQL and the OS should he decide that joro is off the "nice" list. Now MySQL 5.5 comes to the rescue: imagine that the smart DBA has created a MySQL server plugin that checks whether the name of the user logging in is a valid and enabled OS name and whether the password supplied to the mysql client matches the OS password, and has called this plugin 'auth_os'. Now all that's left to do is to define joro as a MySQL user that will be authenticated externally. This is done with the following command: CREATE USER 'joro'@'localhost' IDENTIFIED WITH 'auth_os'; Now joro can log in to MySQL using his current OS password. Note: joro is still a valid MySQL user, so you can grant privileges to him just like you would for all other users. What's better: you can have users that authenticate using different mechanisms on the same server, so you can, e.g., safely experiment with external authentication for selected users while keeping your current user base operational. What happens under the hood when joro logs in? The server will see from the user definition that it needs to use non-default authentication and will ask the client to "switch" to using the appropriate client-side plugin (if, of course, the client is not already using it). If the client can't do this (e.g. because it's an old client or doesn't have the necessary plugin available), the server will reject the login. Otherwise the server will let the server-side plugin decide (while possibly talking to the client-side plugin and the OS user directory) whether this is a valid login or not. If it is, the login process continues as usual; if it's not, the login is rejected. There's a lot more that MySQL 5.5 can do for you than just the simple case above.
Stay tuned for more advanced use cases like mapping groups of external users to a single MySQL user (so you won't have to have 1-to-1 mapping between your external user directory and your mysql user repository) or ways to control the process as a DBA. Or you can simply skip ahead and read the relevant topics from MySQL's excellent online documentation. Or take a look at the example plugins in plugin/auth. Or take a look at the test suite in mysql-test/t/plugin_auth.test.
Changelog entry: http://dev.mysql.com/doc/refman/5.5/en/news-5-5-7.html
Primary new sections:
- Pluggable authentication
- Proxy users
- Client plugin C API functions
Revised sections:
- New PROXY privilege
- New proxies_priv grant table
- Passwords might be external
- New external_user and proxy_user system variables
- New --default-auth and --plugin-dir mysql options
- New MYSQL_DEFAULT_AUTH and MYSQL_PLUGIN_DIR options for mysql_options()
- CREATE USER has IDENTIFIED WITH clause to specify auth plugin
- GRANT has PROXY privilege, IDENTIFIED WITH clause to specify auth plugin
- The data structure for writing client plugins
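To see what this looks like from an application, here is a minimal JDBC sketch of joro's login. It is only an illustration under stated assumptions: that the hypothetical auth_os server plugin pairs with a client-side plugin the driver can satisfy (for example the standard cleartext plugin, as PAM-style plugins typically do), and that MySQL Connector/J is on the classpath; the host, schema and password are placeholders.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ExternalAuthLogin {
    public static void main(String[] args) throws Exception {
        // joro's OS password is what the server-side plugin validates;
        // no usable password is stored for this account in mysql.user.
        String url = "jdbc:mysql://localhost:3306/test";
        Connection conn = DriverManager.getConnection(url, "joro", "joros_os_password");
        try {
            Statement stmt = conn.createStatement();
            // external_user is one of the new system variables exposed in 5.5.
            ResultSet rs = stmt.executeQuery("SELECT CURRENT_USER(), @@external_user");
            while (rs.next()) {
                System.out.println("Logged in as " + rs.getString(1)
                        + ", external user: " + rs.getString(2));
            }
        } finally {
            conn.close();
        }
    }
}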

    Read the article

< Previous Page | 277 278 279 280 281 282 283 284 285 286 287 288  | Next Page >