Search Results

Search found 16560 results on 663 pages for 'far'.


  • SharePoint Backup/Restore without stsadm

    - by Kevin
    Due to problems we found with the restore of sites/site collections using stsadm (tasks generated from our workflows were not restored), we've taken a different route for backup/restore. We plan a major customization to our SP site and want to take a backup so we can roll back in case the install fails.

    In our System Testing (not production) environment, we've backed up the 12 hive, the virtual directories that IIS points to for SharePoint, and the SharePoint databases in SQL (using SQL Server to do the DB backups). We have custom event handlers and workflows built with Visual Studio, and deploy the DLLs to the GAC as version 2 (signed and versioned in Visual Studio). So when we deploy, the GAC will contain two versions of the workflows: version 1 and version 2. During the deploy we use stsadm features to install/activate the WFs. We also go to each library and add the new, version 2 WFs. This automatically sets the version 1 WFs to "Not Allow" new instances (which is what we want) and the version 2 as active - perfect so far.

    When we've completed the install, we then assume a failure and attempt to restore to the same machines (SharePoint on one server, SQL on another). We start by uninstalling the version 2 WFs from the GAC, resetting IIS (to clear its cache of the version 2 WF DLLs), restoring the 12-hive and virtual directory folders, then restoring the SQL DBs. This is all just as manual as you read it - no stsadm here.

    All seems to work after our restore; it appears the restore was successful - the mods we made to column names, data changes, etc. during the install are all reverted back to the original pre-install state. With one exception: when we run a workflow, it always fails, and the logs in the 12-hive indicate the WF is still trying to use version 2 of the DLL (a System.IO file-not-found error). We think we've backed up and restored all the moving pieces of SharePoint, but we're missing something here. Does anybody have any ideas why the version 2 WF DLLs are still being referenced even though we restored all the folders and DBs of SharePoint? Thanks, Kevin
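
    For reference, a minimal sketch of the manual backup/restore steps described above, as Windows commands run on the respective servers. All paths, the database name, and the assembly name are placeholders, not the poster's actual values:

        rem -- On the SharePoint server: copy the 12 hive and the IIS virtual directories
        xcopy "C:\Program Files\Common Files\Microsoft Shared\web server extensions\12" D:\Backup\12hive /E /I
        xcopy C:\Inetpub\wwwroot\wss D:\Backup\vdirs /E /I

        rem -- On the SQL server (run in sqlcmd or SSMS): back up each SharePoint database
        rem    BACKUP DATABASE WSS_Content TO DISK = 'D:\Backup\WSS_Content.bak'

        rem -- During restore: remove the hypothetical v2 assembly and clear IIS
        rem    before copying the folders and databases back
        gacutil /u "MyWorkflows, Version=2.0.0.0, Culture=neutral, PublicKeyToken=abc123def4567890"
        iisreset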


  • JBoss unreachable/slow behind Apache with AJP

    - by Niels
    I have a Linux server running a JBoss instance behind Apache 2. Apache uses an AJP connection to reverse-proxy to JBoss. I found these messages in the Apache error.log:

        [error] (70007)The timeout specified has expired: ajp_ilink_receive() can't receive header
        [error] ajp_read_header: ajp_ilink_receive failed
        [error] (120006)APR does not understand this error code: proxy: read response failed from 8.8.8.8:8009 (hostname)
        [error] (111)Connection refused: proxy: AJP: attempt to connect to 8.8.8.8:8009 (hostname) failed
        [error] ap_proxy_connect_backend disabling worker for (hostname)
        [error] proxy: AJP: failed to make connection to backend: hostname
        [error] proxy: AJP: disabled connection for (hostname)

    I googled around but I can't seem to find any related topics. Some people say this behavior can be caused by a misconfiguration between Apache and JBoss: if the maximum number of connections Apache allows is far greater than what JBoss allows, Apache connections can time out. But I know the app isn't used by thousands of simultaneous connections at a time - not even hundreds - so I don't believe that could be the cause. Does anybody have an idea? Or could you tell me how to debug this problem? I'm using these versions:

        Debian 4.3.5-4, 64-bit
        Apache 2.2.16
        JBoss 4.2.3.GA

    Thanks
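
    One thing worth checking is whether the two connection pools line up. A minimal sketch, assuming mod_proxy_ajp on the Apache side and the stock Tomcat AJP connector inside JBoss 4.2; the numbers and the /myapp path are illustrative, not recommendations:

        # Apache side (e.g. in the vhost): cap idle-connection lifetime and
        # connect timeout so half-dead AJP connections get recycled
        ProxyPass /myapp ajp://8.8.8.8:8009/myapp ttl=60 connectiontimeout=10 retry=30

        <!-- JBoss side: server/default/deploy/jboss-web.deployer/server.xml -
             allow at least as many AJP threads as Apache can open connections -->
        <Connector port="8009" protocol="AJP/1.3" maxThreads="256"
                   connectionTimeout="600000" redirectPort="8443" />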


  • Tooltips shadow stuck on desktop

    - by faulty
    I tend to get this problem from time to time: a tooltip's shadow stays on top of everything. It's the shadow of the tooltip not disappearing after the tooltip itself does. The last time it happened, the tooltip was from the Wi-Fi connection list in the systray. This problem also happens to me on another computer. Both run Win7 with an ATI GPU. I found the similar post Menu command stuck on screen, but none of the solutions helped. In fact, "Fade or slide tooltips into view" has been unchecked from the beginning. Ending the "dwm.exe" task also doesn't help. So far the only way to resolve this is by restarting Windows. I can't post pictures yet, so I can't show any screenshots.

    Edit: I just tested a few more tricks which don't work:

    1. Turn off Aero
    2. Hibernate
    3. Switch the main display to an external display and switch back
    4. Change the resolution


  • Is it worth using a load balancer on a web server/website?

    - by user427969
    I have a website, and a while ago the web server of the company hosting my website was down for about a day. I consulted the company for a solution on how I can stop this from happening in the future, and they suggested having a second machine, connected to my current website/web server by a "load balancer" (at an additional huge cost!). The second machine would be a replica of the first, so if one goes down, the other will always be running.

    ---- Explanation ----

    My hosting company suggested that it would be a good idea to have a second machine running at the same time, with both machines connected by a load balancer, which reduces the risk of downtime. The second machine would be a mirror of the first, and any changes to the first must be replicated on the second.

    I don't mind spending money if it really saves my website from going down. I want to know: is it worth having this "load balancer" for my purpose? My website is a 24/7 service. I cannot afford an outage of an hour, let alone 24 hours. I don't mind using this "load balancer" as long as it is really worth it. I am not sure if it's just a marketing trick of my hosting company or really the best solution. Thanks for the help. Regards


  • Why does ATI 5570 HD video card driver installation cause Windows 7 To Blue Screen?

    - by Mort
    This one is for the hive mind. I have a brand new Dell OptiPlex 760 workstation with 4 gigabytes of RAM running Windows 7 Professional (32-bit). This is a new box with nothing installed other than what was provided directly by Dell. I installed a Sapphire ATI PCI Express 5570 HD. Upon trying to install the 10.4 Catalyst drivers, the system blue-screens during the hardware detection phase of the installation process. I have already performed the following troubleshooting steps:

    - Changed system RAM
    - Installed only 2 gigabytes of RAM
    - Installed different versions of Catalyst drivers (10.4 - 9.12)
    - Tried to install only the video component of the driver (vs. the entire Catalyst suite)
    - Made sure Windows 7 was fully updated
    - Flashed the motherboard BIOS to the current version
    - Removed and re-seated the video card
    - Contacted ATI Support (we all know how this went...)
    - Verified the power supply is outputting properly

    The blue screen error (via the Windows BugCheck entry in the event log) is 0x000000CA and refers to a plug-and-play error most likely caused by a bad driver. The problem is that the driver installation process never gets far enough to actually install a driver. The resolution center in Windows suggests installing the 10.4 Catalyst driver to resolve the issue (which fails). Looking for some alternate views to resolve this.
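
    For anyone digging into this, one way to pull the raw BugCheck details out of the event log is with wevtutil - a quick sketch, on the assumption that the crash details were recorded as the usual event ID 1001 in the System log (adjust the count as needed):

        rem Show the three most recent BugCheck (event ID 1001) entries, newest first
        wevtutil qe System /c:3 /rd:true /f:text /q:"*[System[(EventID=1001)]]"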


  • Microsoft Security Essentials & MsMpEng.exe hogging resources

    - by Mike
    I've been using MSE for a couple of months now and never had a single problem. All of a sudden, the process "MsMpEng.exe" will randomly go crazy and hog all my system resources so that I can't do anything unless I kill it in Task Manager. (I've quit the program for now and my computer is running smoothly.) When I restart the program, reboot, or whatever, it goes off and hogs all the resources again after a couple of minutes. If I kill the process, it will go away and then come back a couple of minutes later and do the same thing. I've scanned with MSE, another antivirus, and a malware scanner, with no problems found. Any ideas? Should I uninstall and find something else? The thing is, I've liked it so far. I'm running Win7 64-bit. Also, I'm not running any other conflicting security programs; this is the only one on my PC right now. Windows Defender is also off.


  • Windows Server Hyper-V guests cannot see each other on network

    - by Noldorin
    I have a Hyper-V physical machine along with two standard laptops running within my LAN (connected by an ASUS-RT56U router). The physical server runs Windows Hyper-V Server 2008 R2, with two Windows Server 2008 R2 (full) guest VMs installed and running within it. Both laptops run Windows 7. All OSs are 64-bit. Opening up Network in Windows Explorer on either of the two laptops displays both of the laptops in the LAN fine. However, neither of the guest VMs on the server (nor the host itself) is displayed. Indeed, the guest VMs cannot see each other in Network view either. I can ping all computers (laptops and servers) without problems from within the LAN, but all of the servers are simply not visible from anywhere. In addition, the Network Map screen (accessible via Network and Sharing Center) gives me an error message: "An error happened during the mapping process." I'm suspecting this might have something to do with how LLTD (Link Layer Topology Discovery) is working on the network. Worth noting, though, is that before my server was on the network, the Network Map screen displayed fine (as far as I can remember).
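
    Network-view visibility on server SKUs often comes down to the discovery services being off by default. A hedged sketch of what one might check on each Server 2008 R2 guest - these are standard commands, but whether this is the actual cause here is an assumption:

        rem Allow Network Discovery traffic through the firewall
        netsh advfirewall firewall set rule group="Network Discovery" new enable=Yes

        rem The Function Discovery services must run for the machine to advertise itself
        sc config fdPHost start= auto && sc start fdPHost
        sc config FDResPub start= auto && sc start FDResPub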



  • Remote server security: handling compiler tools

    - by Gonzolas
    Hello! I was wondering whether to remove compiler tools (gcc, make, ...) from a remote production server, mainly for security purposes.

    Background: the server runs a web application on Linux. Consider Apache jailed. Otherwise, only OpenSSHd faces the public network. Of course there is no compiler stuff within the jail, so this is about the actual OS outside of any jails.

    Here's my personal PRO/CON list (regarding removal) so far:

    PRO: I had been reading some suggestions to remove compiler tools in order to inhibit custom building of trojans etc. from within the host, if an attacker attains unprivileged user permissions.

    CON: I can't live without Perl/Python, and a trojan (or whatever) could be written in a scripting language like that anyway, so why bother removing gcc et al. at all? There is also a need to build new Linux kernels as well as some security tools from source directly on the server, because the server runs in 64-bit mode and (to my understanding) I can't (cross-)compile locally/elsewhere due to lack of another 64-bit hardware system.

    OK, so here are my questions for you: (a) Is my PRO/CON assessment correct? (b) Do you know of other PROs/CONs to removing all compiler tools? Do they weigh in more? (c) Which binaries should I consider dangerous if the given PRO statement holds? Only gcc, or also make, or what else? Should I remove the entire software packages they come with? (d) Is it OK to just move those binaries to a root-only-accessible directory when they are not needed? Or is there a gain in security if I "scp them in" every time? Thank you!
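
    On question (d), a minimal sketch of the lock-away approach (the directory name is arbitrary, and note that a package upgrade may quietly reinstall the binaries):

        # Move build tools into a root-only directory when not needed...
        sudo mkdir -m 0700 /root/buildtools
        sudo mv /usr/bin/gcc /usr/bin/make /root/buildtools/
        # ...and put them back for a kernel build
        sudo mv /root/buildtools/gcc /root/buildtools/make /usr/bin/

        # Alternative on Debian-style systems: keep the file in place but strip
        # non-root access; dpkg-statoverride records the change so package
        # upgrades don't silently undo it
        sudo dpkg-statoverride --update --add root root 0700 /usr/bin/gcc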


  • Apple video adapters: Mini-DVI to DVI + DVI to video?

    - by Ken
    I have:

    - a MacBook (which has Mini-DVI output)
    - an Apple Mini-DVI to DVI adapter (so I can hook up my MacBook to my DVI LCD)
    - a Mac mini (which has DVI output)

    I see that I can get a DVI-to-video (S-Video and composite) adapter from Apple that will let me hook up my Mac mini to my TV set (which has only component, S-Video, and composite - nothing digital). So far so good. Question: will that same adapter also let me hook my MacBook to my TV? That is, can I chain MacBook - Mini-DVI to DVI - DVI to video - TV set, and see a picture? I know there are digital/analog/integrated variants of DVI, and it's not at all clear to me what pins these things have and what signals they're sending. One website I found suggested that it would work, and another suggested that it wouldn't even physically connect. So I'm looking for, ideally, someone who's actually tried it, or who has these adapters sitting around to try. I know I can buy two adapters (Mini-DVI to Video, and DVI to Video) to do this, but at $20 a pop I'll avoid that if at all possible.


  • SQL Server Windows-only Authentication Strategy problem

    - by Mike Thien
    I would like to use Windows-only authentication in SQL Server for our web applications. In the past we've always created one all-powerful SQL login for each web application. After doing some initial testing, we've decided to create Windows Active Directory groups that mimic the security roles of the application (i.e. Administrators, Managers, Users/Operators, etc.). We've created logins in SQL Server mapped to these groups and given them access to the application's database. In addition, we've created SQL Server database roles and assigned each group the appropriate role. This is working great. My issue is that for most of the applications, everyone in the company should have read access to the reports (and hence the data). As far as I can tell, I have two options:

    1. Create a read-only/viewer AD group and put everyone in it.
    2. Use the "domain\domain users" group(s) and assign them the correct roles in SQL.

    What is the best and/or easiest way to allow everyone read access to specific database objects using a Windows-only authentication method?
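
    Either way, the plumbing is the same; a minimal T-SQL sketch of option 2 (the domain, database, and view names are placeholders):

        -- Map the AD group to a SQL Server login
        CREATE LOGIN [MYDOMAIN\Domain Users] FROM WINDOWS;
        GO
        USE MyAppDb;
        -- Give the group a database user and blanket read access
        CREATE USER [MYDOMAIN\Domain Users] FOR LOGIN [MYDOMAIN\Domain Users];
        EXEC sp_addrolemember 'db_datareader', 'MYDOMAIN\Domain Users';
        GO
        -- Or, to limit read access to specific objects instead of the whole DB:
        -- GRANT SELECT ON dbo.SomeReportView TO [MYDOMAIN\Domain Users];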


  • Strange corruption saving from Textpad 5 within Windows 7-64 VirtualBox VM to shared folder with Mac host

    - by joelarson
    I have a fairly new Windows 7 64-bit install running in VirtualBox on a MacBook Pro. I'm using TextPad 5 within that environment to edit source files that live on a shared folder on the Mac host. When I save some of these source files, the saved file ends up with some amount of the end of the file repeated one or more times. For example, a file that ends with:

        ...
        return ttp;
        };

    would, once saved, open up with:

        ...
        return ttp;
        };
        };

    It is definitely a problem with how the file gets written, as opposed to how it's read, because I can see this no matter what app I open the file with (Notepad and Word in Windows 7, TextWrangler back on the Mac). I've tried saving as ANSI and UTF-8, and with or without "Write Unicode and UTF-8 BOM" checked in TextPad preferences. It doesn't happen with all files, though I can't see any pattern as to which files do or don't have the problem. It doesn't happen with files written to the Windows 7 C:\ drive. And so far it doesn't happen with other applications saving files - only TextPad. Any ideas? My versions:

        TextPad 5.4.2
        Windows 7 Professional 64-bit, fully up to date
        VirtualBox 4.0.8 r71778
        OS X 10.6.7


  • Changing the interface language in Windows 7 Home Premium

    - by Cristián Romo
    A friend of mine has recently purchased a laptop in the U.S. that has Windows 7 Home Premium on it with an English interface. Not being a native English speaker, I'm trying to change the interface language to Traditional Chinese. I've looked through the Control Panel in search of something that might let me change the interface language. Naturally, I looked at the Region and Language section and managed to change the formats the computer uses and install a working keyboard, but I haven't found a way to change the interface language. Upon doing some research, I found out that there are two kinds of interface packs: Multilingual User Interface (MUI) packs and Language Interface Packs (LIP). It seems that MUIs can only be installed through Windows Update, so I looked through the list of updates. To my dismay, the language packs are not present; the optional updates tab doesn't even show up. Many sites show a drop-down menu under the Keyboards and Languages tab in the Region and Language options, yet it doesn't show up for me. We also don't have the Windows 7 DVD, which might contain this useful file. As far as LIPs go, I can't find one for Chinese at all, let alone Traditional Chinese. Can the interface language be changed in Home Premium at all? If it can, how would I do so?


  • New virtualization project and old SAN

    - by Chris
    Hi, we'll shortly start a partial virtualization of our infrastructure and consolidate a dozen servers into virtual instances. We'll also add some client application virtualization into the mix for good measure. Two HP DL380s with the new Xeon 56xx CPUs and 96 GB of memory each, running XenServer + XenApp, will then take charge of most of our IT needs. So far, so good. One element that is missing from the picture is the storage part. We need some sort of shared storage to enable live motion and other HA features. We have an IBM DS4300 SAN that we can use for that. But since it has been in production since 2005, I'm not sure about giving such a critical role to a 5-year-old part. So my question is: what is the reliability of this kind of equipment after 5 years? Can it last 10 years with no or few problems? Since our budget is tight, not buying another SAN would be a big plus. This leads me to another question: FC disks cost an arm and a leg from IBM. When I type the replacement part number into Google (for example, IBM 300GB 15K 4GBPS FC HDD 42D0410), I can find it at a fraction of the price at various sites. So am I stupid to buy from IBM, or naive to trust a third-party reseller? Thanks, Chris


  • LogMeIn Hamachi for Linux

    - by tlunter
    So far most of my work using LogMeIn Hamachi has been from either a Mac OS X or Windows system to a Windows or Linux computer. Recently I purchased a mini computer and have been running Ubuntu Server on it as my little server. I knew LogMeIn had a Linux client that is command-line only, but I often do all my work via the command line anyway, so that wasn't an issue. I added my user to the correct local file so that I could run the Hamachi daemon without sudo, and was able to connect to LogMeIn's service. I decided to set up my Linux server as a Git server as well, and set that up correctly. The thing is, the server is behind my school's firewall and I need to use Hamachi to get around that. Since most of the time I was using either Mac or Windows, I never had an issue SSHing onto any of my computers, since LogMeIn is fully featured for those OSs. From Linux (Arch), though, it seems like the client cannot correctly route to the LogMeIn IPs. I know from Windows I can connect to the Linux computers, both of them. From Linux (Arch), though, I can't connect to my Mac, Windows, or Linux server - it keeps just dropping the connection. I was wondering if there is some configuration I would need to make for this to work. I understand that it is most likely going to be a static configuration, since I assume it has to do with the computer not understanding that 5.*.*.* actually refers to another IP:port. Has anyone had any experience getting this to work?
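
    A first step on the Arch box would be confirming that the Hamachi tunnel interface and the 5.0.0.0/8 route actually exist. A quick diagnostic sketch - the interface name ham0 is a guess on my part; check what your client actually creates:

        # Is the tunnel interface up, and does 5.x traffic have a route?
        ip addr show
        ip route show
        # If the 5.0.0.0/8 route is missing, adding it manually would look like:
        # sudo ip route add 5.0.0.0/8 dev ham0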


  • Copying symbolic links and filenames with special characters to NAS

    - by Mr E
    I have a new Western Digital My Book Live NAS. I am trying to copy files from an old drive to the NAS. I'm using Ubuntu 12.04, and I've mounted the drive by browsing the network in Nautilus and choosing a shared folder configured on the NAS. The shared folder is then automatically mounted at .gvfs/files on mybooklive. There are two problems so far:

    1. File names and directory names containing certain characters (e.g. : or |). Attempting to copy these results in the error message:

        cp: cannot stat `/path/to/destination.filename': Invalid argument

    2. Symbolic links. In Nautilus I get the error message:

        Symlinks not supported by backend

    My questions are:

    1. Can I connect to the NAS, or configure the NAS, so that I can copy my files without this problem? (In case it matters, I don't need Windows compatibility.)
    2. If not, what can I do to identify all the problem files?
    3. Can I do anything to automatically fix my filenames?

    Please let me know if any of this needs clarification. I'm not too familiar with all of this, so I may have left out some useful information.
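
    On identifying the problem files (question 2), a minimal sketch with find, assuming the old drive is mounted at /media/olddrive; the path and the character set are placeholders, so extend the bracket expression with whatever characters your NAS rejects:

        # List names containing characters the NAS is likely to reject
        find /media/olddrive -name '*[|:?*<>"]*'
        # List all symbolic links, which the gvfs backend can't copy
        find /media/olddrive -type l

        # A conservative automatic fix for question 3: rename offending names,
        # replacing bad characters with '_' (renames in place - test first!)
        find /media/olddrive -depth -name '*[|:?*<>"]*' -exec bash -c '
          for p; do
            dir=$(dirname "$p"); base=$(basename "$p")
            mv -n "$p" "$dir/$(echo "$base" | tr "|:?*<>\"" "_")"
          done' _ {} +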


  • Apache2, Tomcat6, and proxy redirects

    - by Randal Hale
    So here is my question - go easy and slow. I'm a GIS consultant and general hack with Linux. I inherited this volunteer job essentially because I knew more than the rest of the team - or the rest of the team isn't as stubborn as I am... With that said, a number of people had been mucking around in the server before I got involved, so I've been cleaning up a lot of things. The domain names have been changed to protect the innocent. I have a server running Apache 2 (port 80) and Tomcat 6 (port 8080) on Ubuntu Server 10.04. There is a virtual host on Apache called "Runner" (the domain is runner.org). I have mod_proxy loaded. I am trying to redirect everyone that visits runner.org to http://some.ip.address:8080/openrunner-webapp/ So far I've gotten runner.org assigned to the Apache server. Someone set up a redirect in the httpd.conf file, but I believe it needs to go into the virtual host. I tried setting the redirect in the virtual host as:

        ProxyPass / http://localhost:8080/openrunner-webapp

    All that does is show me the root of the Apache web server. Anyway, I'm stuck.
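
    For what it's worth, a minimal sketch of what that virtual host might look like with mod_proxy. Note the trailing slashes on both sides of the ProxyPass, and the matching ProxyPassReverse so redirects coming back from Tomcat get rewritten; the paths are from the question, everything else is illustrative:

        <VirtualHost *:80>
            ServerName runner.org
            ProxyPass        / http://localhost:8080/openrunner-webapp/
            ProxyPassReverse / http://localhost:8080/openrunner-webapp/
        </VirtualHost>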


  • Ubuntu 12.04 glusterfs volume failed to mount at boot time

    - by user183394
    I have just set up 7 KVM guests, all running Ubuntu 12.04 LTS 64-bit minimal server, to test out glusterfs 3.2.5 from the official Ubuntu repo. Two of them form a mirrored pair (i.e. replica 2), and five of them are clients. I am still new to this file system and would like to gain some hands-on experience. The setup was mostly uneventful, until I put the following into each glusterfs client's /etc/fstab:

        192.168.122.120:/testvol /var/local/testvol glusterfs defaults,_netdev 0 0

    where 192.168.122.120 is the IP address of the first glusterfs server. If I issue either a manual mountall or a mount.glusterfs 192.168.122.120:/testvol /var/local/testvol on the CLI, mount shows that the volume is successfully imported. But once a client is rebooted, after it comes back up the volume is not mounted! I searched the Internet and found this article, but since I am not running both client and server on the same node, IMHO it's not strictly applicable. So, as a kludgy workaround, I put

        sleep 3 && mount.glusterfs 192.168.122.120:/testvol /var/local/testvol

    into each client node's /etc/rc.local. It seems to get the volume mounted on each node, as far as I can tell. But this is quite ugly, and I would appreciate a hint as to how to resolve this glusterfs-not-mounting-at-boot issue correctly. Note that I used the IP address of the first glusterfs server although the /etc/hosts of all nodes has been populated with their hostnames; I figured that using the IP address is more robust. --Zack
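
    One avenue worth trying (an assumption on my part: on 12.04 it is mountall, not the old netdev logic, that decides when network filesystems get mounted, and glusterfs may simply not be on its list) is marking the entry nobootwait so a hung gluster mount can't block boot, and letting a retry - the rc.local hack above, or a proper upstart task - pick it up once the network is up. The fstab line would become:

        192.168.122.120:/testvol /var/local/testvol glusterfs defaults,_netdev,nobootwait 0 0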


  • apache permission errors

    - by Wilduck
    I'm trying to set up Apache on an Arch Linux box as a testing environment (I'm only using localhost, not trying to serve anything to the greater web). When setting up Django with mod_wsgi, the docs recommended that I set up a WSGIScriptAlias from / to /usr/local/django/mysite/apache/django.wsgi. I've done this, as well as added the /usr/.../apache directory to my httpd.conf. When I try to access http://localhost I get a 403 Forbidden error. I have no idea why this is happening. Things I've tried so far:

    1. chown -R http .../apache
    2. chmod -R 777 .../apache
    3. Using a simple Alias directive to host a static file from that directory.

    None of these have worked. I'm at a loss for what I'm doing wrong. Below is a relevant excerpt from my httpd.conf:

        Alias / /usr/local/django/mysite/apache
        <Directory "/usr/local/django/mysite/apache">
            Order deny,allow
            Allow from all
        </Directory>

    So my question is: what am I doing wrong?
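
    A sketch of how that excerpt would look using mod_wsgi the way the Django docs describe (assuming mod_wsgi is loaded): WSGIScriptAlias, not Alias, is what hands requests to the .wsgi file - with a plain Alias, Apache tries to serve the directory itself, which is one common source of a 403:

        WSGIScriptAlias / /usr/local/django/mysite/apache/django.wsgi
        <Directory "/usr/local/django/mysite/apache">
            Order deny,allow
            Allow from all
        </Directory>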


  • Creating Windows 8.1 system image error

    - by Random
    I'm experiencing a "not enough space" error when trying to create a system image on a USB hard drive:

        Detailed error: ERROR - A Volume Shadow Copy Service operation error has occurred: (0x8004231f) Insufficient storage available to create either the shadow copy storage file or other shadow copy data. Blah, blah...

        There is not enough disk space to create the volume shadow copy on the storage location. Make sure that, for all volumes to be backed up, the minimum required disk space for shadow copy creation is available. This applies to both the backup storage destination and volumes included in the backup. Minimum requirement: for volumes less than 500 megabytes, the minimum is 50 megabytes of free space. For volumes more than 500 megabytes, the minimum is 320 megabytes of free space. Recommended: at least 1 gigabyte of free disk space on each volume if volume size is more than 1 gigabyte. ERROR - A Volume Shadow Copy Service operation error has occurred: (0x8004231f) Insufficient storage available to create either the shadow copy storage file or other shadow copy data.

    I tried both PowerShell:

        wbAdmin start backup -backupTarget:E: -include:C: -allCritical -quiet

    and Control Panel - the File History button. Clearly both the EFI and Windows Recovery Environment partitions don't meet the requirements stated by the System Image tool (pic below). On top of that, all system partitions are now shown as 100% free in Disk Management, which is disturbing but far from the actual state. My question is: how do I create a System Image in Windows 8.1?
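
    One thing to inspect before anything else is where VSS thinks its shadow storage lives and how big it may grow - a quick sketch from an elevated prompt. Whether this is the culprit on this machine is an assumption, but the 320 MB free-space check famously trips over small EFI/recovery partitions:

        rem Show shadow storage associations and current size limits
        vssadmin list shadowstorage
        rem Show all volumes VSS can see, including the small system partitions
        vssadmin list volumes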


  • PDFs and Networked Printers

    - by Bart Silverstrim
    Weird issue. We have users printing to networked Windows-shared printers (print server: Win2003 SP2). Some users have been reporting recently that they can't print PDF documents to particular printers (two example printers are an HP 2430 with the PCL 6 driver and a 4250 with the PCL 6 driver). At first, we found that on many of these systems the "Everyone" object had been added to the permissions for the root of the C: volume but had no permissions checked. We added modify privileges to it (these are Deep Freeze systems, so modifications to these systems that we don't add as administrators won't matter) and they seemed to be able to print. Perhaps Acrobat Reader was writing a temp file for printing somewhere users didn't have permission, we surmised, so we made the change and moved on. Yesterday the user called in saying it's still not working. We looked at it: bring up a PDF, click Print, and the Reader app says that you have to install a printer. Look at the Printers folder (Windows XP workstation), and it has printers installed. Print a test page, return to AcroReader, and it will print fine to that printer. The whole time, web pages, MS Office documents, etc. print without issue to the same printers. Has anyone seen this issue with Acrobat Reader 9 and certain network printer drivers or shares involving HP printers? I'd post this to SuperUser, but it seems to be associated with a networked printer issue; it affects subsets of users but may be more widespread (our users assume we just know about it and don't report it), and I've found no rhyme or reason as to why it affects just PDF printing and particular printers. The print spoolers are all running on the workstations and print server without errors being logged so far, but I'm going through the logs now to see if I can find anything out of place.


  • Any chance to extract Windows from recovery DVDs

    - by Pekka
    I have an Acer tablet PC that came with Windows XP Tablet PC Edition in the form of three recovery DVDs. Sadly, a mainboard fault put the machine out of business. I have now bought a used one from a different manufacturer that comes without an operating system. The recovery DVDs seem to contain three parts of a Norton Ghost image, and nothing else. The recovery DVD won't even start on a non-Acer system. I'm a bit miffed because I legally own a Windows XP Tablet PC Edition license that I now can't use on the original machine any more. As far as I know, it's not legal in my jurisdiction for them to bind the license to a certain machine. I want to continue using the operating system on the new machine. Is there any chance of extracting usable Windows XP installation files from that image? How are such image files usually made up? Is there any free software around that can read Norton Ghost images, so I can take a peek myself?


  • Excel data into PowerPoint slides

    - by nqw1
    I have already found some helpful sites, but I'm still unable to do what I want. My Excel file contains a few columns and multiple rows. All the data from one row would go on one slide, but data from different cells in that one row should go to specific elements of the PP slide. First, is it possible to export data from an Excel cell into a specific text box in PP? For example, I would like all data from the first column of each row to go to a Text box 1. Let's say I have 100 rows; then I would have 100 slides, and each slide would have a Text box 1 with the correct data. The text box on slide 66 would hold the data from the first column of row 66. Then all data from the second column of each row would go to a Text box 2, and so on. I tried to write some macros, with bad success. I also tried to use Word outlines and export them into PP (New Slide - Slides from Outline), but there seems to be a bug, since I got 250 pages of gibberish. I had only two paragraphs, each with one word: the first paragraph used the Heading 1 style and the second used the Normal style. The sites I have found use VBA and/or some other programming language to create slides from Excel sheets. I have tried to add those VB codes to my macros, but none of them has worked so far. Probably I just don't know how to use them correctly :) Here are some helpful sites:

        VBA: Create PowerPoint Slide for Each Row in Excel Workbook
        Creating a Presentation Report Based on Data
        Question in Stackoverflow

    I use Office 2011 on Mac. Any help would be appreciated!
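
    For what it's worth, a minimal VBA sketch of the row-to-slide idea, run from Excel with the data sheet active. The shape indexes, the two-column layout, and the late-bound automation are all assumptions; Office 2011's VBA is close to the Windows version, but cross-application automation there can need tweaking:

        Sub RowsToSlides()
            ' Drive PowerPoint from Excel: one slide per data row (a sketch).
            Dim ppApp As Object, ppPres As Object, ppSlide As Object
            Dim ws As Worksheet, r As Long, lastRow As Long

            Set ws = ActiveSheet
            lastRow = ws.Cells(ws.Rows.Count, 1).End(xlUp).Row

            Set ppApp = CreateObject("PowerPoint.Application")
            ppApp.Visible = True
            Set ppPres = ppApp.Presentations.Add

            For r = 1 To lastRow
                ' Layout 2 (ppLayoutText) gives two placeholder text boxes.
                Set ppSlide = ppPres.Slides.Add(r, 2)
                ppSlide.Shapes(1).TextFrame.TextRange.Text = CStr(ws.Cells(r, 1).Value) ' column A -> "Text box 1"
                ppSlide.Shapes(2).TextFrame.TextRange.Text = CStr(ws.Cells(r, 2).Value) ' column B -> "Text box 2"
            Next r
        End Sub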


  • needing storage integrity (write/read) test - for BASH

    - by Mr. Bash
    In need of shell scripts / bash commands to verify the data integrity of local hard drives, USB drives, etc. - like the famous www.heise.de/download/h2testw, or something that is at least common within the repositories. (h2testw writes a specific data string over and over onto the medium, then reads it again to verify it was written correctly, and displays write/read time/speed.) Please, no

        dd if=/dev/random of=/dev/sdx bs=1k && dd if=/dev/sdx of=/dev/null bs=1k

    since it won't verify that everything was written correctly; it only tests whether read/write to the device succeeds. So far I'm not too happy with badblocks -w -v /dev/sdx1 either, since it seems rather slow, and I don't know exactly what it writes or whether it considers wear-leveling on flash media. There is also a program named F3 (http://oss.digirati.com.br/f3/) that needs to be compiled. Designed after h2testw, the concept sounds interesting; I'd just rather have it as a ready-to-go bash script.
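
    In the spirit of what's asked for, a minimal h2testw-style fill-and-verify sketch in bash. It assumes the medium is mounted at the given path and is otherwise empty; it checks that what was written can be read back intact, but unlike h2testw it makes no attempt to account for wear-leveling:

        #!/usr/bin/env bash
        # Usage: ./fillverify.sh /mnt/mounted-test-medium
        set -u
        TARGET=${1:?usage: $0 /mnt/mounted-test-medium}
        CHUNK=$(mktemp)                      # reference chunk lives off the test medium
        dd if=/dev/urandom of="$CHUNK" bs=1M count=64 2>/dev/null
        REF=$(sha256sum "$CHUNK" | cut -d' ' -f1)

        # Fill phase: copy the reference chunk until the medium is full.
        i=0
        while cp "$CHUNK" "$TARGET/fill_$i.bin" 2>/dev/null; do
            i=$((i + 1))
        done
        rm -f "$TARGET/fill_$i.bin"          # drop the last, partially written copy
        sync
        sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'   # read from the medium, not the RAM cache

        # Verify phase: every copy must hash identically to the reference.
        fail=0
        for f in "$TARGET"/fill_*.bin; do
            [ "$(sha256sum "$f" | cut -d' ' -f1)" = "$REF" ] || { echo "MISMATCH: $f"; fail=1; }
        done
        rm -f "$CHUNK"
        [ "$fail" -eq 0 ] && echo "OK: $i chunks verified" || echo "Errors found"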


  • Get Python to raise MemoryError instead of eating all my disk space

    - by asmeurer
    If I run a Python program with a memory leak, I would normally expect the program to eventually die with MemoryError. But instead, what happens is that all the virtual memory is used until my disk runs out of space. I am running Mac OS X 10.8 on a Retina MacBook Pro. My computer generally has between 10 GB and 20 GB free. Mac OS X is smart enough not to die completely when the disk runs out of space (rather, it gives me a dialog letting me force-quit my GUI programs). Is there a way to make Python just die when it runs out of real memory, or after some reasonable amount of virtual memory? This is what happens on Linux, as far as I can tell. I guess Mac OS X is more generous than Linux with virtual memory (the fact that I have an SSD might be part of this; I don't know just how smart OS X is with this stuff). Maybe there's a way to tell the Mac OS X kernel to never use so much virtual memory that it leaves less than, say, 5 GB free on the hard drive?
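
    One lever worth trying from inside the process is the resource module - a sketch that caps the address space so allocations past the limit raise MemoryError. How faithfully OS X enforces RLIMIT_AS is a known gray area, so treat this as an experiment, not a guarantee:

        import resource

        # Cap total virtual memory at 4 GiB; allocations beyond this should
        # fail and surface in Python as MemoryError.
        LIMIT = 4 * 1024 ** 3
        resource.setrlimit(resource.RLIMIT_AS, (LIMIT, LIMIT))

        try:
            data = []
            while True:          # deliberate leak to demonstrate the cap
                data.append(bytearray(100 * 1024 * 1024))
        except MemoryError:
            print("hit the cap instead of filling the disk")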

