Search Results

Search found 4099 results on 164 pages for 'bulk export'.

  • Migrating Windows XP BOOT.INI Settings to Windows 7 Boot-loader

    - by Synetech inc.
    Hi, Two months ago my motherboard died, so I bought a used computer that came with Windows 7. I have since installed my old hard-drive, which had Windows XP on it, in this system. What I am trying to do now is to figure out a way to migrate the settings from XP's BOOT.INI into 7's boot-loader. Below is the BOOT.INI I used in XP (I have reduced the strings and updated the disks to point to the new location of the old HD. Oh and I am not clear on the drive letters. In XP, I could boot the recovery console or MS-DOS from a file in C:\ that contains the boot-sector. I am not sure what drive letter it would be called now—I had to manually change all the drive letters of the old partitions in Windows 7 because it auto-assigned them all wrong/differently).

        [boot loader]
        timeout=10
        default=multi(0)disk(0)rdisk(1)partition(1)\WINDOWS
        [operating systems]
        multi(0)disk(0)rdisk(1)partition(1)\WINDOWS="XP" /fastdetect
        multi(0)disk(0)rdisk(1)partition(1)\WINDOWS="XP (Safe)" /safeboot:network /sos /bootlog /noguiboot
        C:\CMDCONS\BOOTSECT.DAT="Recovery Console" /cmdcons
        C:\BOOTSECT.DOS="MS-DOS 7.10" /win95

    I have looked around, and have only been able to find some bcdedit commands to add XP to the boot-loader, but none that include information on setting safe-mode for it (or changing any of the XP load options for that matter). Not surprisingly I suppose, I have not found anything on adding the XP recovery console or DOS to the Windows 7 boot-loader. (Yes, I tried EasyBCD, but that did not help; it had no options for XP, and the best I managed was to get a choice of booting 7 or normal-mode XP—choosing XP didn't even give the old XP boot menu.) Can anyone please tell me how to export the entries in XP's boot.ini to 7's boot-loader so that on boot, I can choose to load the following:

        Windows 7
        Windows 7 (Safe-mode)
        (Windows 7 (The Win7 counterpart of the Recovery Console))
        Windows XP
        Windows XP (Safe-mode)
        Windows XP (Recovery Console)
        MS-DOS 7.10
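
    One approach, offered as a hedged sketch rather than a tested recipe: the Windows 7 boot-loader can only chain-load ntldr for XP-era systems, so the BCD gets a single "Windows XP" entry and the safe-mode, Recovery Console and MS-DOS choices stay in the boot.ini that ntldr itself reads. Assuming the old XP system volume shows up in Windows 7 as D: (a hypothetical letter; substitute the real one), the commands from an elevated command prompt would look roughly like this:

        rem ntldr, ntdetect.com and boot.ini must sit in the root of the XP volume (D:\ here)
        bcdedit /create {ntldr} /d "Windows XP (ntldr menu)"
        bcdedit /set {ntldr} device partition=D:
        bcdedit /set {ntldr} path \ntldr
        bcdedit /displayorder {ntldr} /addlast
        rem a Windows 7 safe-mode menu entry is a copy of the current entry with safeboot set
        rem (the /copy command prints the new GUID to paste into the /set line)
        bcdedit /copy {current} /d "Windows 7 (Safe-mode)"
        bcdedit /set {new-guid-from-previous-line} safeboot minimal

    Selecting the XP entry at boot should then show the old boot.ini menu, which is where the XP safe-mode, Recovery Console and MS-DOS 7.10 lines continue to live; those per-line switches cannot be turned into separate BCD entries, only delegated to ntldr.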

    Read the article

  • Querying Visual Studio project files using T-SQL and Powershell

    - by jamiet
    Earlier today I had a need to get some information out of a Visual Studio project file and in this blog post I’m going to share a couple of ways of going about that because I’m pretty sure I won’t be the only person that ever wants to do this. The specific problem I was trying to solve was finding out how many objects in my database project (i.e. in my .dbproj file) had any warnings suppressed, but the techniques discussed below will work pretty well for any Visual Studio project file because every such file is simply an XML document, hence it can be queried by anything that can query XML documents. Ever heard the phrase “when all you’ve got is a hammer everything looks like a nail”? Well that’s me with querying stuff – if I can write SQL then I’m writing SQL. Here’s a little noddy database project I put together for demo purposes: two views and a stored procedure, nothing fancy. I suppressed warnings for [View1] & [Procedure1] and hence the pertinent part of my project file looks like this:

      <ItemGroup>
        <Build Include="Schema Objects\Schemas\dbo\Views\View1.view.sql">
          <SubType>Code</SubType>
          <SuppressWarnings>4151,3276</SuppressWarnings>
        </Build>
        <Build Include="Schema Objects\Schemas\dbo\Views\View2.view.sql">
          <SubType>Code</SubType>
        </Build>
        <Build Include="Schema Objects\Schemas\dbo\Programmability\Stored Procedures\Procedure1.proc.sql">
          <SubType>Code</SubType>
          <SuppressWarnings>4151</SuppressWarnings>
        </Build>
      </ItemGroup>

    Note the <SuppressWarnings> elements – those are the bits of information that I am after. With a lot of help from folks on the SQL Server XML forum I came up with the following query that nailed what I was after. It reads the contents of the .dbproj file into a variable of type XML and then shreds it using T-SQL’s XML data type methods:

      DECLARE @xml XML;
      SELECT @xml = CAST(pkgblob.BulkColumn AS XML)
      FROM   OPENROWSET(BULK 'C:\temp\QueryingProjectFileDemo\QueryingProjectFileDemo.dbproj' -- <-Change this path!
                       ,single_blob) AS pkgblob
      ;
      WITH XMLNAMESPACES( 'http://schemas.microsoft.com/developer/msbuild/2003' AS ns)
      SELECT  REVERSE(SUBSTRING(REVERSE(ObjectPath),0,CHARINDEX('\',REVERSE(ObjectPath)))) AS [ObjectName]
             ,[SuppressedWarnings]
      FROM   (
             SELECT  build.query('.') AS [_node]
             ,       build.value('ns:SuppressWarnings[1]','nvarchar(100)') AS [SuppressedWarnings]
             ,       build.value('@Include','nvarchar(1000)') AS [ObjectPath]
             FROM    @xml.nodes('//ns:Build[ns:SuppressWarnings]') AS R(build)
             )q

    And here’s the output: And that’s it – an easy way of discovering which warnings have been suppressed and for which objects in your database projects. I won’t bother going over the code as it is fairly self-explanatory – peruse it at your leisure. Once I had the SQL above I figured I’d share it around a little in case it was ever useful to anyone else; hence I’m writing this blog post and I also posted it on the Visual Studio Database Development Tools forum at FYI: Discover which objects have had warnings suppressed. Luckily Kevin Goode saw the thread and he posted a different solution to the same problem, one that uses Powershell. The advantage of Kevin’s Powershell approach is that it is easy to analyse many .dbproj files at the same time.
    Below is Kevin’s code which I have tweaked ever so slightly so that it produces the same results as my SQL script (I just want any object that had had a warning suppressed whereas Kevin was querying specifically for warning 4151):

      cd 'C:\Temp\QueryingProjectFileDemo\'
      cls
      $projects = ls -r -i *.dbproj
      Foreach($project in $projects)
      {
          $xml = new-object System.Xml.XmlDocument
          $xml.set_PreserveWhiteSpace( $true )
          $xml.Load($project)
          #$xpath = @{Start="/e:Project/e:ItemGroup/e:Build[e:SuppressWarnings=4151]/@Include"}
          #$xpath = @{Start="/e:Project/e:ItemGroup/e:Build[contains(e:SuppressWarnings,'4151')]/@Include"}
          $xpath = @{Start="/e:Project/e:ItemGroup/e:Build[e:SuppressWarnings]/@Include"}
          $ns = @{ e = "http://schemas.microsoft.com/developer/msbuild/2003" }
          $xml | Select-Xml -XPath $xpath.Start -Namespace $ns | Select -Expand Node | Select -expand Value
      }

    and here’s the output: Nice reusable Powershell and SQL scripts – not bad for an evening’s work. Thank you to Kevin for allowing me to share his code. Don’t forget that these techniques can easily be adapted to query any Visual Studio project file, they’re only XML documents after all! Doubtless many people out there already have code for doing this but nonetheless here is another offering to the great script library in the sky. Have fun! @Jamiet

    Read the article

  • Migrating Windows 2003 File Server Cluster to Windows 2008 R2 Standalone?

    - by Tatas
    We have a situation where we have an aging Windows 2003 File Server Cluster that we'd like to move to a standalone Windows Server 2008 R2 VM that resides in our Hyper-V R2 installation. We see no need to keep the Clustering as Hyper-V is now providing our Failover/Redundancy. Usually, in a standalone file server migration we migrate the data, preserving NTFS permissions and then export the sharing permissions from the registry and import them on the new server. This does not appear possible in this instance, as the 2003 cluster stores the sharing permissions quite differently. My question is, how would one perform this type of migration? Is it even possible? My current lead is the File Server Migration Toolkit, however I can find no information on the net about migrating from cluster to standalone, only the opposite. Please help. UPDATE: We ended up getting the data copied over (permissions intact), but had to recreate the shares manually by hand. It was a bit of a pain but it did in the end work out.
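
    For the standalone-to-standalone case this normally comes down to one registry key, so (as a hedged sketch, assuming the folder paths and drive letters match on the destination server) the export/import would look like the lines below. On a 2003 cluster, though, the share definitions are checkpointed against the clustered file-share resources rather than kept in this key, which is why the usual trick falls flat and recreating the shares by hand ends up being the practical answer:

        rem on the source server: dump the share names, paths and share-level permissions
        reg export "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Shares" shares.reg
        rem on the destination server: import and restart the Server service so the shares appear
        reg import shares.reg
        net stop server && net start server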

    Read the article

  • Talend Enterprise Data Integration overperforms on Oracle SPARC T4

    - by Amir Javanshir
    The SPARC T microprocessor, released in 2005 by Sun Microsystems, and now continued at Oracle, has a good track record in parallel execution and multi-threaded performance. However it was less suited for pure single-threaded workloads. The new SPARC T4 processor is now filling that gap by offering a 5x better single-thread performance over previous generations. Following our long-term relationship with Talend, a fast growing ISV positioned by Gartner in the “Visionaries” quadrant of the “Magic Quadrant for Data Integration Tools”, we decided to test some of their integration components with the T4 chip, more precisely on a T4-1 system, in order to verify first hand if this new processor stands up to its promises. Several tests were performed, mainly focused on: Single-thread performance of the new SPARC T4 processor compared to an older SPARC T2+ processor Overall throughput of the SPARC T4-1 server using multiple threads The tests consisted in reading large amounts of data --ten's of gigabytes--, processing and writing them back to a file or an Oracle 11gR2 database table. They are CPU, memory and IO bound tests. Given the main focus of this project --CPU performance--, bottlenecks were removed as much as possible on the memory and IO sub-systems. When possible, the data to process was put into the ZFS filesystem cache, for instance. Also, two external storage devices were directly attached to the servers under test, each one divided in two ZFS pools for read and write operations. Multi-thread: Testing throughput on the Oracle T4-1 The tests were performed with different number of simultaneous threads (1, 2, 4, 8, 12, 16, 32, 48 and 64) and using different storage devices: Flash, Fibre Channel storage, two stripped internal disks and one single internal disk. All storage devices used ZFS as filesystem and volume management. Each thread read a dedicated 1GB-large file containing 12.5M lines with the following structure: customerID;FirstName;LastName;StreetAddress;City;State;Zip;Cust_Status;Since_DT;Status_DT 1;Ronald;Reagan;South Highway;Santa Fe;Montana;98756;A;04-06-2006;09-08-2008 2;Theodore;Roosevelt;Timberlane Drive;Columbus;Louisiana;75677;A;10-05-2009;27-05-2008 3;Andrew;Madison;S Rustle St;Santa Fe;Arkansas;75677;A;29-04-2005;09-02-2008 4;Dwight;Adams;South Roosevelt Drive;Baton Rouge;Vermont;75677;A;15-02-2004;26-01-2007 […] The following graphs present the results of our tests: Unsurprisingly up to 16 threads, all files fit in the ZFS cache a.k.a L2ARC : once the cache is hot there is no performance difference depending on the underlying storage. From 16 threads upwards however, it is clear that IO becomes a bottleneck, having a good IO subsystem is thus key. Single-disk performance collapses whereas the Sun F5100 and ST6180 arrays allow the T4-1 to scale quite seamlessly. From 32 to 64 threads, the performance is almost constant with just a slow decline. For the database load tests, only the best IO configuration --using external storage devices-- were used, hosting the Oracle table spaces and redo log files. Using the Sun Storage F5100 array allows the T4-1 server to scale up to 48 parallel JVM processes before saturating the CPU. The final result is a staggering 646K lines per second insertion in an Oracle table using 48 parallel threads. Single-thread: Testing the single thread performance Seven different tests were performed on both servers. 
    Given the fact that only one thread, thus one file, was read, no IO bottleneck was involved, all data being served from the ZFS cache.

      Read File -> Filter -> Write File: Read file, filter data, write the filtered data in a new file. The filter is set on the "Status" column: only lines with status set to "A" are selected. This limits each output file to about 500 MB.
      Read File -> Load Database Table: Read file, insert into a single Oracle table.
      Average: Read file, compute the average of a numeric column, write the result in a new file.
      Division & Square Root: Read file, perform a division and square root on a numeric column, write the result data in a new file.
      Oracle DB Dump: Dump the content of an Oracle table (12.5M rows) into a CSV file.
      Transform: Read file, transform, write the result data in a new file. The transformations applied are: set the address column to upper case and add an extra column at the end, which is the concatenation of two columns.
      Sort: Read file, sort a numeric and alpha numeric column, write the result data in a new file.

    The following table and graph present the final results of the tests. Throughput unit is thousand lines per second processed (K lines/second). Improvement is the % of improvement between the T5140 and T4-1.

      Test                     T4-1 (Time s.)   T5140 (Time s.)   Improvement   T4-1 (Throughput)   T5140 (Throughput)
      Read/Filter/Write        125              806               645%          100                 16
      Read/Load Database       195              1111              570%          64                  11
      Average                  96               557               580%          130                 22
      Division & Square Root   161              1054              655%          78                  12
      Oracle DB Dump           164              945               576%          76                  13
      Transform                159              1124              707%          79                  11
      Sort                     251              1336              532%          50                  9

    The improvement of single-thread performance is quite dramatic: depending on the tests, the T4 is between 5.4 and 7 times faster than the T2+. It seems clear that the SPARC T4 processor has gone a long way towards filling the gap in single-thread performance, without sacrificing the multi-threaded capability, as it still shows very impressive scaling on heavy-duty multi-threaded jobs. Finally, as always at Oracle ISV Engineering, we are happy to help our ISV partners test their own applications on our platforms, so don't hesitate to contact us and let's see what the SPARC T4-based systems can do for your application! "As described in this benchmark, Talend Enterprise Data Integration has overperformed on T4. I was generally happy to see that the T4 gave scaling opportunities for many scenarios like complex aggregations. Row by row insertion in Oracle DB is faster with more than 650,000 rows per second without using any bulk Oracle capabilities!" Cedric Carbone, Talend CTO.

    Read the article

  • Linux Mint reset display resolution from console

    - by wullxz
    I have a Linux Mint 13 Xfce in a VMware Workstation 8 VM and set the resolution from 800x600 to 1280x768 and now I get permanently logged out when I try to login. I knew how to get back to my old resolution back in the xorg.conf days but Linux Mint now uses xrandr which won't display any displays when running # xrandr because X is not running (of course not - I can't login over GUI). I know that there are configuration files in /etc/X11/Xsession.d/ because I configured a debian based thinclient's resolution in a file called /etc/X11/Xsession.d/91configure_display but that file doesn't exist in my Linux Mint VM. So, how do I reset my X screen resolution from console? Edit: I forgot to tell you that I can't change resolution in console: # xrandr -s 800x600 Can't open display This message appears every time I use xrandr or xrandr -s *resolution* Update: I tried what bWowk suggested: # export DISPLAY=:0.0 # xrandr -s 800x600 No protocol specified No protocol specified Can't open display :0.0 So, that doesn't work either. Isn't there a configuration file that is executed every time X starts? X is running btw - ps aux | grep X shows one process /usr/bin/X running.
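
    A hedged sketch of one way out, assuming the bad mode is being re-applied from the per-user Xfce display settings (xfsettingsd) rather than from an xorg.conf: delete that user's saved displays.xml from the console and restart the display manager, and the session should come back at a mode the virtual display can handle.

        # run as the user who keeps getting logged out
        rm ~/.config/xfce4/xfconf/xfce-perchannel-xml/displays.xml
        # restart the login manager; Mint 13 normally uses MDM, substitute lightdm if that is what is installed
        sudo service mdm restart

    The "No protocol specified / Can't open display" errors are an X authorisation problem: the text console is not allowed to talk to the running X server even with DISPLAY exported, so removing the saved settings is simpler than trying to drive xrandr from outside the session.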

    Read the article

  • Exporting Environment Variables in Ubuntu Linux

    - by stanigator
    I know many people have asked about environment variables before, but I am having a hard time dealing with these paths while ensuring I don't mess around with the original settings. How would you go about executing these commands in Ubuntu in terms of environment variables? Thanks in advance! Please put /home/stanley/Downloads/ns-allinone-2.34/bin:/home/stanley/Downloads/ns-allinone-2.34/tcl8.4.18/unix:/home/stanley/Downloads/ns-allinone-2.34/tk8.4.18/unix into your PATH environment; so that you'll be able to run itm/tclsh/wish/xgraph. IMPORTANT NOTICES: (1) You MUST put /home/stanley/Downloads/ns-allinone-2.34/otcl-1.13, /home/stanley/Downloads/ns-allinone-2.34/lib, into your LD_LIBRARY_PATH environment variable. If it complains about X libraries, add path to your X libraries into LD_LIBRARY_PATH. If you are using csh, you can set it like: setenv LD_LIBRARY_PATH If you are using sh, you can set it like: export LD_LIBRARY_PATH= (2) You MUST put /home/stanley/Downloads/ns-allinone-2.34/tcl8.4.18/library into your TCL_LIBRARY environmental variable. Otherwise ns/nam will complain during startup.
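
    In bash (Ubuntu's default shell) the three settings from that installer notice translate to something like the sketch below. Appending to the existing values via $PATH and $LD_LIBRARY_PATH keeps the original settings untouched, and adding the same lines to ~/.bashrc makes them stick for future terminals:

        # ns-2.34 executables (itm/tclsh/wish/xgraph)
        export PATH=$PATH:/home/stanley/Downloads/ns-allinone-2.34/bin:/home/stanley/Downloads/ns-allinone-2.34/tcl8.4.18/unix:/home/stanley/Downloads/ns-allinone-2.34/tk8.4.18/unix
        # shared libraries (otcl and lib), appended rather than replaced
        export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/stanley/Downloads/ns-allinone-2.34/otcl-1.13:/home/stanley/Downloads/ns-allinone-2.34/lib
        # Tcl library that ns/nam look for at startup
        export TCL_LIBRARY=/home/stanley/Downloads/ns-allinone-2.34/tcl8.4.18/library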

    Read the article

  • Managing a test iSCSI target server

    - by dyasny
    Hi all, I am using a RHEL server with a few hard drives, and tgtd as the iSCSI target software. I am looking for a way to allocate and deallocate space, and targets with that space, without restarting my system or harming other LUNs. Currently, all my HDDs are PVs in a single VG, and I lvcreate/lvremove as required, and then export the allocated LVs using a tgt script:

        /usr/sbin/tgtadm --lld iscsi --op new --mode target --tid=1 --targetname iqn.2001-04.com.lab.gss:300gb
        /usr/sbin/tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/mapper/iscsi_vg-iscsi_300Gb
        /usr/sbin/tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
        /usr/sbin/tgtadm --lld iscsi --op new --mode target --tid=2 --targetname iqn.2001-04.com.lab.gss:200gb
        /usr/sbin/tgtadm --lld iscsi --op new --mode logicalunit --tid 2 --lun 1 -b /dev/mapper/iscsi_vg-iscsi_200Gb
        /usr/sbin/tgtadm --lld iscsi --op bind --mode target --tid 2 -I ALL
        /usr/sbin/tgtadm --lld iscsi --op new --mode target --tid=3 --targetname iqn.2001-04.com.lab.gss:100gb
        /usr/sbin/tgtadm --lld iscsi --op new --mode logicalunit --tid 3 --lun 1 -b /dev/mapper/iscsi_vg-iscsi_100Gb
        /usr/sbin/tgtadm --lld iscsi --op bind --mode target --tid 3 -I ALL
        tgtadm --mode target --op show

    So in order to remove a LUN, I stop the tgtd service, lvremove the LV, and remove the entry from the iSCSI target script. When I add a LUN, I run lvcreate, and then add an entry to the script and run it. This is not quite optimal, since restarting the service is a bad idea while other LUNs are busy, so I am looking for a more scalable and safer way. Thanks
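
    A hedged sketch of doing the same thing online with tgtadm, so tgtd never has to be restarted (the tid, size and LV name below are made up for illustration; check that no initiator is logged in to a target before tearing it down, e.g. with tgtadm --mode target --op show):

        # allocate: create the LV, then expose it as a brand new target on the fly
        lvcreate -L 50G -n iscsi_50Gb iscsi_vg
        /usr/sbin/tgtadm --lld iscsi --op new --mode target --tid 4 --targetname iqn.2001-04.com.lab.gss:50gb
        /usr/sbin/tgtadm --lld iscsi --op new --mode logicalunit --tid 4 --lun 1 -b /dev/mapper/iscsi_vg-iscsi_50Gb
        /usr/sbin/tgtadm --lld iscsi --op bind --mode target --tid 4 -I ALL
        # deallocate: tear down only that target, then reclaim the space; tids 1-3 are untouched
        /usr/sbin/tgtadm --lld iscsi --op delete --mode logicalunit --tid 4 --lun 1
        /usr/sbin/tgtadm --lld iscsi --op delete --mode target --tid 4
        lvremove /dev/iscsi_vg/iscsi_50Gb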

    Read the article

  • Yet another blog about IValueConverter

    - by codingbloke
    After my previous blog on a Generic Boolean Value Converter I thought I might as well blog up another IValueConverter implementation that I use. The Generic Boolean Value Converter effectively converters an input which only has two possible values to one of two corresponding objects.  The next logical step would be to create a similar converter that can take an input which has multiple (but finite and discrete) values to one of multiple corresponding objects.  To put it more simply a Generic Enum Value Converter. Now we already have a tool that can help us in this area, the ResourceDictionary.  A simple IValueConverter implementation around it would create a StringToObjectConverter like so:- StringToObjectConverter using System; using System.Windows; using System.Windows.Data; using System.Linq; using System.Windows.Markup; namespace SilverlightApplication1 {     [ContentProperty("Items")]     public class StringToObjectConverter : IValueConverter     {         public ResourceDictionary Items { get; set; }         public string DefaultKey { get; set; }                  public StringToObjectConverter()         {             DefaultKey = "__default__";         }         public virtual object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)         {             if (value != null && Items.Contains(value.ToString()))                 return Items[value.ToString()];             else                 return Items[DefaultKey];         }         public virtual object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)         {             return Items.FirstOrDefault(kvp => value.Equals(kvp.Value)).Key;         }     } } There are some things to note here.  The bulk of managing the relationship between an object instance and the related string key is handled by the Items property being an ResourceDictionary.  Also there is a catch all “__default__” key value which allows for only a subset of the possible input values to mapped to an object with the rest falling through to the default. We can then set one of these up in Xaml:-             <local:StringToObjectConverter x:Key="StatusToBrush">                 <ResourceDictionary>                     <SolidColorBrush Color="Red" x:Key="Overdue" />                     <SolidColorBrush Color="Orange" x:Key="Urgent" />                     <SolidColorBrush Color="Silver" x:Key="__default__" />                 </ResourceDictionary>             </local:StringToObjectConverter> You could well imagine that in the model being bound these key names would actually be members of an enum.  This still works due to the use of ToString in the Convert method.  Hence the only requirement for the incoming object is that it has a ToString implementation which generates a sensible string instead of simply the type name. I can’t imagine right now a scenario where this converter would be used in a TwoWay binding but there is no reason why it can’t.  I prefer to avoid leaving the ConvertBack throwing an exception if that can be be avoided.  Hence it just enumerates the KeyValuePair entries to find a value that matches and returns the key its mapped to. Ah but now my sense of balance is assaulted again.  Whilst StringToObjectConverter is quite happy to accept an enum type via the Convert method it returns a string from the ConvertBack method not the original input enum type that arrived in the Convert.  
Now I could address this by complicating the ConvertBack method and examining the targetType parameter etc.  However I prefer to a different approach, deriving a new EnumToObjectConverter class instead. EnumToObjectConverter using System; namespace SilverlightApplication1 {     public class EnumToObjectConverter : StringToObjectConverter     {         public override object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)         {             string key = Enum.GetName(value.GetType(), value);             return base.Convert(key, targetType, parameter, culture);         }         public override object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)         {             string key = (string)base.ConvertBack(value, typeof(String), parameter, culture);             return Enum.Parse(targetType, key, false);         }     } }   This is a more belts and braces solution with specific use of Enum.GetName and Enum.Parse.  Whilst its more explicit in that the a developer has to  choose to use it, it is only really necessary when using TwoWay binding, in OneWay binding the base StringToObjectConverter would serve just as well. The observant might note that there is actually no “Generic” aspect to this solution in the end.  The use of a ResourceDictionary eliminates the need for that.

    Read the article

  • How do photoshop slices and layer comps interact?

    - by Steve314
    I'm interested in using Photoshop (I have CS2) for some user interface design. I was hoping to be able to use slices and layer comps to mark out particular elements, and use Javascript scripting to export multiple graphics files and text descriptions (positions and sizes of slices mainly) that will be used by my program. My problem is that I've never used Photoshop for web design, or otherwise used slices, and I'm not confident that I understand how they interact with layer comps. This is what I believe (and hope) is correct... Manual slices aren't affected by layer comps in any way - they aren't saved as part of a layer comp. The same manual slices will be active irrespective of which layer comp is selected. Layer-based slices aren't directly affected by layer comps, but they are indirectly affected in that the layer comp saves details of layer position and style. Thus selecting a layer comp may move a layer and change its style, affecting the location and size of its layer-based slice, or may effectively disable the slice by hiding the layer. Automatic slices aren't directly affected by layer comps, but are indirectly affected due to changes to the layer-based slices. So, layer based slices (which are my main interest) may move, may change size (to accomodate a style such as a drop shadow), and may be effectively disabled by the layer being hidden. Other details (and all details of manual slices) will remain constant irrespective of which layer comp is active. Is that correct?

    Read the article

  • Tell VLC where to look for plugins.dat file

    - by puk
    I am trying to build vlc from source (I will include installation script below), but when I try to run vlc I get the following error main libvlc warning: cannot read /home/user/downloads/vlc3/vlc/src/.libs/vlc/plugins/plugins.dat (No such file or directory) Why is it even looking in that non existant directory? The plugins.dat file is in /usr/lib/vlc/plugins/. I tried export VLC_PLUGIN_PATH=/usr/lib/vlc/plugins/ But it still looks in that non existent path. I can create a symbolic link, but that is a terrible way to do it. If in 6 months I delete my downloads folder, all of a sudden my vlc will break. Here is the script I am running to install: ./configure --enable-rpi-omxil --enable-dvbpsi --enable-x264 --enable-xcb --with-x --enable-xvideo --enable-sdl --enable-avcodec --enable-avformat --enable-swscale --enable-mad --enable-a52 --enable-libmpeg2 --enable-dvdnav --enable-faad --enable-vorbis --enable-ogg --enable-theora --enable-mkv --enable-freetype --enable-fribidi --enable-speex --enable-flac --enable-live555 --enable-caca --enable-skins2 --enable-alsa --enable-ncurses --enable-debug --enable-lirc --enable-live555 --enable-shout --enable-taglib --enable-vcdx --enable-realrtsp --enable-svg --enable-dvdread --enable-dc1394 --enable-twolame --enable-dirac --enable-aa --enable-jack --enable-bluray --enable-opencv --enable-sftp --enable-pulse --enable-projectm --enable-vsxu --enable-atmo --enable-glspectrum '--with-extra-libs=/usr/local/lib' '--with-extra-includes=/usr/local/include' '--x-libraries=/usr/local/lib' '--x-includes=/usr/local/include' '--prefix=/usr/local' '--mandir=/usr/local/man' '--infodir=/usr/local/info/' EDIT: I am using the following version: VLC media player 2.2.0-git Weatherwax (revision 2.1.0-git-1168-g5804dd1) And the --plugin-path option is no longer supported.
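
    A couple of things worth trying, offered as hedged guesses based on how VLC builds usually behave rather than anything specific to this tree: an uninstalled binary run from src/.libs looks for its plugin cache next to itself, and a git build cannot reuse the cache or plugins of the distro's older VLC in /usr/lib/vlc, since plugins have to match the build that loads them. So either run the wrapper script from the build root, or install the build and regenerate its own cache:

        # run the uninstalled build via its wrapper so the in-tree plugin paths are set up for you
        cd ~/downloads/vlc3/vlc && ./vlc
        # or, after make install (prefix /usr/local per the configure line), rebuild the plugin cache
        sudo /usr/local/lib/vlc/vlc-cache-gen /usr/local/lib/vlc/plugins

    If the cache-gen path differs on this system, "find /usr/local -name vlc-cache-gen" should locate it.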

    Read the article

  • T-SQL Tuesday #21 - Crap!

    - by Most Valuable Yak (Rob Volk)
    Adam Machanic's (blog | twitter) ever popular T-SQL Tuesday series is being held on Wednesday this time, and the topic is… SHIT CRAP. No, not fecal material.  But crap code.  Crap SQL.  Crap ideas that you thought were good at the time, or were forced to do due (doo-doo?) to lack of time. The challenge for me is to look back on my SQL Server career and find something that WASN'T crap.  Well, there's a lot that wasn't, but for some reason I don't remember those that well.  So the additional challenge is to pick one particular turd that I really wish I hadn't squeezed out.  Let's see if this outline fits the bill: An ETL process on text files; That had to interface between SQL Server and an AS/400 system; That didn't use SSIS (should have) or BizTalk (ummm, no) but command-line scripting, using Unix utilities(!) via: xp_cmdshell; That had to email reports and financial data, some of it sensitive Yep, the stench smell is coming back to me now, as if it was yesterday… As to why SSIS and BizTalk were not options, basically I didn't know either of them well enough to get the job done (and I still don't).  I also had a strict deadline of 3 days, in addition to all the other responsibilities I had, so no time to learn them.  And seeing how screwed up the rest of the process was: Payment files from multiple vendors in multiple formats; Sent via FTP, PGP encrypted email, or some other wizardry; Manually opened/downloaded and saved to a particular set of folders (couldn't change this); Once processed, had to be placed BACK in the same folders with the original archived; x2 divisions that had to run separately; Plus an additional vendor file in another format on a completely different schedule; So that they could be MANUALLY uploaded into the AS/400 system (couldn't change this either, even if it was technically possible) I didn't feel so bad about the solution I came up with, which was naturally: Copy the payment files to the local SQL Server drives, using xp_cmdshell Run batch files (via xp_cmdshell) to parse the different formats using sed, a Unix utility (this was before Powershell) Use other Unix utilities (join, split, grep, wc) to process parsed files and generate metadata (size, date, checksum, line count) Run sqlcmd to execute a stored procedure that passed the parsed file names so it would bulk load the data to do a comparison bcp the compared data out to ANOTHER text file so that I could grep that data out of the original file Run another stored procedure to import the matched data into SQL Server so it could process the payments, including file metadata Process payment batches and log which division and vendor they belong to Email the payment details to the finance group (since it was too hard for them to run a web report with the same data…which they ran anyway to compare the emailed file against…which always matched, surprisingly) Email another report showing unmatched payments so they could manually void them…about 3 months afterward All in "Excel" format, using xp_sendmail (SQL 2000 system) Copy the unmatched data back to the original folder locations, making sure to match the file format exactly (if you've ever worked with ACH files, you'll understand why this sucked) If you're one of the 10 people who have read my blog before, you know that I love the DOS "for" command.  Like passionately.  Like fairy-tale love.  So my batch files were riddled with for loops, nested within other for loops, that called other batch files containing for loops.  
I think there was one section that had 4 or 5 nested for commands.  It was wrong, disturbed, and completely un-maintainable by anyone, even myself.  Months, even a year, after I left the company I got calls from someone who had to make a minor change to it, and they called me to talk them out of spraying the office with an AK-47 after looking at this code.  (for you Star Trek TOS fans) The funniest part of this, well, one of the funniest, is that I made the deadline…sort of, I was only a day late…and the DAMN THING WORKED practically unchanged for 3 years.  Most of the problems came from the manual parts of the overall process, like forgetting to decrypt the files, or missing/late files, or saved to the wrong folders.  I'm definitely not trying to toot my own horn here, because this was truly one of the dumbest, crappiest solutions I ever came up with.  Fortunately as far as I know it's no longer in use and someone has written a proper replacement.  Today I would knuckle down and do it in SSIS or Powershell, even if it took me weeks to get it right. The real lesson from this crap code is to make things MAINTAINABLE and UNDERSTANDABLE.  sed scripting regular expressions doesn't fit that criteria in any way.  If you ever find yourself under pressure to do something fast at all costs, DON'T DO IT.  Stop and consider long-term maintainability, not just for yourself but for others on your team.  If you can't explain the basic approach in under 5 minutes, it ultimately won't succeed.  And while you may love to leave all that crap behind, it may follow you anyway, and you'll step in it again.   P.S. - if you're wondering about all the manual stuff that couldn't be changed, it was because the entire process had gone through Six Sigma, and was deemed the best possible way.  Phew!  Talk about stink!

    Read the article

  • Problems starting autossh on boot [ubuntu]

    - by Ken
    I'm trying to automatically start an SSH tunnel to my server on boot from an Ubuntu box. I have an Ubuntu box that's mounted on an 18-wheeler and is networked behind an air card. The box hosts a MySQL database that I'm trying to have replicated when the aircard is connected. As I can never be sure of my IP and how many or which routers I'm behind, I'm connected to my replication server with an SSH tunnel. I got that working using the following command: ssh -R 3307:localhost:3307 [email protected] Now I'd like that to start whenever the box does, and be alive all the time, so I installed autossh and set up this little script:

        ID=xkenneth
        HOST=erdosmiller.com
        AUTOSSH_POLL=15
        AUTOSSH_PORT=20000
        AUTOSSH_GATETIME=30
        AUTOSSH_DEBUG=yes
        AUTOSSH_PATH=/usr/bin/ssh
        export AUTOSSH_POLL AUTOSSH_DEBUG AUTOSSH_PATH AUTOSSH_GATETIME AUTOSSH_PORT
        autossh -2 -fN -M 20000 -R 3307:localhost:3306 ${ID}@${HOST}

    I've tried putting this script in /etc/init.d/ and using a post-up command in /etc/network/interfaces as well as putting it in /etc/network/if-up.d/. In both situations the script starts on boot, but the tunnel doesn't appear to be correctly established. The script works when run manually.
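
    A common reason this works by hand but not at boot: the if-up.d / init.d script runs as root, so ssh no longer picks up xkenneth's private key and known_hosts, and the tunnel silently fails to authenticate. A hedged sketch of an if-up.d script that pins those down (the ppp0 interface name is an assumption; match it to whatever the air card actually brings up):

        #!/bin/sh
        # /etc/network/if-up.d/mysql-tunnel -- start the tunnel when the aircard interface comes up
        [ "$IFACE" = "ppp0" ] || exit 0
        # run as the user whose keys are authorised on the server; GATETIME=0 stops autossh
        # giving up if the first attempt races the link coming up
        su - xkenneth -c "AUTOSSH_GATETIME=0 autossh -f -M 20000 -N -o ServerAliveInterval=30 -o StrictHostKeyChecking=no -R 3307:localhost:3306 xkenneth@erdosmiller.com"

    Logging in as xkenneth once and confirming the host key interactively (ssh erdosmiller.com) also removes the need for the StrictHostKeyChecking override.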

    Read the article

  • Install self-signed certificate on local server (iis)

    - by ile
    On this page there are instructions on how to create self-signed cert (on apache) and how to install this certificate on server. I found this page (http://www.visualwin.com/SelfSSL/) with instructions on how to create self-signed certificate on windows (iis). I followed instructions and when I type https://myip/myapp (this leads to localhost because I set my router's port forwarding to go to localhost on my pc) this part works. From the first link, the most important part is this: What needs to be installed in IE is actually the Root CA Certificate. In the how-to above, the Root CA Certificate is called ca.crt. Copy this file to the server that is running QuickBooks. The following is for IE6: - Open IE - Tools - Internet Options - Content - Certificates - Trusted Root Certification Authorities Tab - Import, Next, Browse to 'ca.crt' - Next, Next, Finish, Close, OK The part that is missing in second link is that there is no instruction on how to get .crt file, so I tried to get it myself. What I did was following: I opened https://myip/myapp in Firefox and then "This Connection is Untrusted" screen appeared. Then I clicked on "Add Exception" and then below "Certificate Status" I clicked "View". Under the Details tab I clicked on Export and choosed Save as type: "X 509 Certificate (PEM)" and file was saved with .crt extension. Then I opened IE8 and followed above instructions. After opening https://myip/myapp in IE8 I always get warning screen. Does anyone knows what am I doing wrong? Thanks, Ile
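
    Two things usually explain the persistent warning here, offered as hedged guesses: the IE wizard run that way only imports into the current user's certificate store, and a SelfSSL certificate issued to the machine name will still trip a name-mismatch warning when the site is opened as https://myip/... . A sketch of importing the exported ca.crt into the machine-wide Trusted Root store from an elevated command prompt, which certutil does in one line:

        certutil -addstore -f Root ca.crt

    If the address-bar warning persists after that, it is likely the common-name mismatch; re-issuing the certificate with the name actually used in the URL (for example SelfSSL /N:CN=myip, a hypothetical value) gets rid of it.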

    Read the article

  • Using Hadooop (HDInsight) with Microsoft - Two (OK, Three) Options

    - by BuckWoody
    Microsoft has many tools for “Big Data”. In fact, you need many tools – there’s no product called “Big Data Solution” in a shrink-wrapped box – if you find one, you probably shouldn’t buy it. It’s tempting to want a single tool that handles everything in a problem domain, but with large, complex data, that isn’t a reality. You’ll mix and match several systems, open and closed source, to solve a given problem. But there are tools that help with handling data at large, complex scales. Normally the best way to do this is to break up the data into parts, and then put the calculation engines for that chunk of data right on the node where the data is stored. These systems are in a family called “Distributed File and Compute”. Microsoft has a couple of these, including the High Performance Computing edition of Windows Server. Recently we partnered with Hortonworks to bring the Apache Foundation’s release of Hadoop to Windows. And as it turns out, there are actually two (technically three) ways you can use it. (There’s a more detailed set of information here: http://www.microsoft.com/sqlserver/en/us/solutions-technologies/business-intelligence/big-data.aspx, I’ll cover the options at a general level below)  First Option: Windows Azure HDInsight Service  Your first option is that you can simply log on to a Hadoop control node and begin to run Pig or Hive statements against data that you have stored in Windows Azure. There’s nothing to set up (although you can configure things where needed), and you can send the commands, get the output of the job(s), and stop using the service when you are done – and repeat the process later if you wish. (There are also connectors to run jobs from Microsoft Excel, but that’s another post)   This option is useful when you have a periodic burst of work for a Hadoop workload, or the data collection has been happening into Windows Azure storage anyway. That might be from a web application, the logs from a web application, telemetrics (remote sensor input), and other modes of constant collection.   You can read more about this option here:  http://blogs.msdn.com/b/windowsazure/archive/2012/10/24/getting-started-with-windows-azure-hdinsight-service.aspx Second Option: Microsoft HDInsight Server Your second option is to use the Hadoop Distribution for on-premises Windows called Microsoft HDInsight Server. You set up the Name Node(s), Job Tracker(s), and Data Node(s), among other components, and you have control over the entire ecostructure.   This option is useful if you want to  have complete control over the system, leave it running all the time, or you have a huge quantity of data that you have to bulk-load constantly – something that isn’t going to be practical with a network transfer or disk-mailing scheme. You can read more about this option here: http://www.microsoft.com/sqlserver/en/us/solutions-technologies/business-intelligence/big-data.aspx Third Option (unsupported): Installation on Windows Azure Virtual Machines  Although unsupported, you could simply use a Windows Azure Virtual Machine (we support both Windows and Linux servers) and install Hadoop yourself – it’s open-source, so there’s nothing preventing you from doing that.   Aside from being unsupported, there are other issues you’ll run into with this approach – primarily involving performance and the amount of configuration you’ll need to do to access the data nodes properly. 
But for a single-node installation (where all components run on one system) such as learning, demos, training and the like, this isn’t a bad option. Did I mention that’s unsupported? :) You can learn more about Windows Azure Virtual Machines here: http://www.windowsazure.com/en-us/home/scenarios/virtual-machines/ And more about Hadoop and the installation/configuration (on Linux) here: http://en.wikipedia.org/wiki/Apache_Hadoop And more about the HDInsight installation here: http://www.microsoft.com/web/gallery/install.aspx?appid=HDINSIGHT-PREVIEW Choosing the right option Since you have two or three routes you can go, the best thing to do is evaluate the need you have, and place the workload where it makes the most sense.  My suggestion is to install the HDInsight Server locally on a test system, and play around with it. Read up on the best ways to use Hadoop for a given workload, understand the parts, write a little Pig and Hive, and get your feet wet. Then sign up for a test account on HDInsight Service, and see how that leverages what you know. If you're a true tinkerer, go ahead and try the VM route as well. Oh - there’s another great reference on the Windows Azure HDInsight that just came out, here: http://blogs.msdn.com/b/brunoterkaly/archive/2012/11/16/hadoop-on-azure-introduction.aspx  

    Read the article

  • CSV engine on MySQL server

    - by Jeff
    I don't think that this is a programming question so I am going to ask it here - Reading the book High Performance MySQL, I read about the CSV engine. The paragraph says: The CSV engine can treat comma-separated values (CSV) files as tables, but it does not support indexes on them. This engine lets you copy files in and out of the database while the server is running. If you export a CSV file from a spreadsheet and save it in the MySQL server's data directory, the server can read it immediately. Similarly, if you write data to a CSV table, an external program can read it right away. CSV tables are especially useful as a data interchange format and for certain kinds of logging. What I get from this paragraph is that I can copy a .CSV file into the data directory of the database, and it should show as a table that is able to be read from. However, whenever I copy a test .csv file into the directory, it does not appear as a table. I can't access it. I am using MySQL 5.5, by the way. Does anyone know why this is not working, or what I am doing wrong? Thanks
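
    What the book passage glosses over, as a hedged sketch of the usual sequence: the server only treats a .CSV file as a table once a matching CSV-engine table definition exists (and every column must be declared NOT NULL), so the file is normally created by MySQL first and then swapped for the exported data; dropping a stray file into the data directory by itself registers nothing. Table and column names below are made up for illustration, and /var/lib/mysql is assumed as the data directory:

        mysql -u root -p -e "CREATE TABLE test.spreadsheet (id INT NOT NULL, name VARCHAR(50) NOT NULL, amount DECIMAL(10,2) NOT NULL) ENGINE=CSV"
        # overwrite the empty .CSV file MySQL just created with the exported data (same column layout)
        sudo cp export.csv /var/lib/mysql/test/spreadsheet.CSV
        sudo chown mysql:mysql /var/lib/mysql/test/spreadsheet.CSV
        mysql -u root -p test -e "FLUSH TABLES spreadsheet; SELECT * FROM spreadsheet LIMIT 5;"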

    Read the article

  • Recover mysql database - mysqldump gives "table <tablename> doesn't exist (1146)"

    - by Matthew
    Backstory Ubuntu died (wouldn't boot) and I couldn't fix it. I booted a live cd to recover the important stuff and saved it to my NAS. One of the things I backed up was /var/lib/mysql. Reinstalled with Linux Mint because I was on Ubuntu 10.0.4 this was a good opportunity to try a new distro (and I don't like Unity). Now I want to recover my old mediawiki, so I shut down mysql daemon, cp -R /media/NAS/Backup/mysql/mediawiki@002d1_19_1 /var/lib/mysql/, set file ownership and permissions correctly, and start mysql back up. Problem Now I'm trying to export the database so I can restore the database, but when I execute the mysqldump I get an error: $ mysqldump -u mediawikiuser -p mediawiki-1_19_1 -c | gzip -9 > wiki.2012-11-15.sql.gz Enter password: mysqldump: Got error: 1146: Table 'mediawiki-1_19_1.archive' doesn't exist when using LOCK TABLES Things I've tried I tried using --skip-lock-tables but I get this: Error: Couldn't read status information for table archive () mysqldump: Couldn't execute 'show create table `archive`': Table 'mediawiki-1_19_1.archive' doesn't exist (1146) I tried logging in to mysql and I can list the tables that should be there, but trying to describe or select from them errors out the same way as the dump: mysql> show tables; +----------------------------+ | Tables_in_mediawiki-1_19_1 | +----------------------------+ | archive | | category | | categorylinks | ... | user_properties | | valid_tag | | watchlist | +----------------------------+ 49 rows in set (0.00 sec) mysql> describe archive; ERROR 1146 (42S02): Table 'mediawiki-1_19_1.archive' doesn't exist I believe mediawiki was installed using innodb and binary data. Am I screwed or is there a way to recover this?
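
    The symptoms line up with restoring only the database directory: the .frm files came across (so the table names still list), but for InnoDB the data dictionary lives in the shared ibdata1 file, and without it every table throws error 1146. A sketch of the fix, assuming the NAS backup of /var/lib/mysql also contains ibdata1 and the ib_logfile* files:

        sudo service mysql stop
        sudo cp /media/NAS/Backup/mysql/ibdata1 /media/NAS/Backup/mysql/ib_logfile0 /media/NAS/Backup/mysql/ib_logfile1 /var/lib/mysql/
        sudo chown -R mysql:mysql /var/lib/mysql
        sudo service mysql start
        mysqldump -u mediawikiuser -p mediawiki-1_19_1 -c | gzip -9 > wiki.2012-11-15.sql.gz

    If the server then complains about the ib_logfile sizes, either set innodb_log_file_size in my.cnf to match the restored files, or leave the copied ib_logfile0/1 out so InnoDB recreates them (safe only if the old server was shut down cleanly when the backup was taken).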

    Read the article

  • Installing Tomcat on CentOS 5

    - by andybaird
    Disclaimer: I am not a server admin, I am a windows user that has lead a life of sinful installation wizards and drag and drop I'm attempting to install Tomcat on CentOS 5 hosted by a MediaTemple dedicated virtual server. I basically followed this guide: Installed jpackage and configured the yum.repo.d jpackage file to set enabled=1 Used yum to install java (yum install java) Downloaded the binary distribution of tomcat with "wget http://archive.apache.org/dist/tomcat/tomcat-6/v6.0.14/bin/apache-tomcat-6.0.14.tar.gz" set JAVA_HOME to point at the jdk location I found with "export JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0/" I gunzip/untar the Tomcat files and run ./startup.sh to start the Tomcat server. That is supposed to put the Tomcat server at myserver.com:8080 - however, I just get a could not contact host error when I try to browse to it (or when I try 'curl localhost:8080' from SSH) After I type ./startup.sh, here is the console output: [root@myserver bin]# ./startup.sh Using CATALINA_BASE: /root/apache-tomcat-6.0.14 Using CATALINA_HOME: /root/apache-tomcat-6.0.14 Using CATALINA_TMPDIR: /root/apache-tomcat-6.0.14/temp Using JRE_HOME: /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0/ [root@myserver bin]# Is there a step I have missed here? Edit: I've now discovered by looking at the log the following error is occuring: Error occurred during initialization of VM Could not reserve enough space for object heap
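
    That last error is the real blocker: on a Virtuozzo-style VPS such as the (mt) DV, the JVM cannot reserve its default heap, so Tomcat exits right after the startup banner and nothing ever listens on 8080. A sketch of capping the heap before starting (the sizes are guesses; tune them to the container's memory limit), since catalina.sh picks JAVA_OPTS up automatically:

        export JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0/
        export JAVA_OPTS="-Xms64m -Xmx256m"
        ./startup.sh
        # give it a few seconds, then confirm something is listening
        curl -I http://localhost:8080/

    Putting the same export in apache-tomcat-6.0.14/bin/setenv.sh keeps it out of the shell profile and applies it on every start.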

    Read the article

  • VNC server failed to start CentOS

    - by Shaun
    I followed a tutorial on how to install and get VNCserver to run on CentOS 6 (since freenx isnt supported yet) and I keep getting Starting VNC server: 1:user [FAILED] How do I figure out whats going on here? Im new to Linux/CentOS and im trying to get RDP going so I can step away from SSH as much as possible (you know us Windows users love our pretty GUI's). So, where is the error log at and how do I find it? Or maybe someone else has experienced this and knows the solution based on the simple error given? After running in debug mode, here is my error + . /etc/init.d/functions ++ TEXTDOMAIN=initscripts ++ umask 022 ++ PATH=/sbin:/usr/sbin:/bin:/usr/bin ++ export PATH ++ '[' -z '' ']' ++ COLUMNS=80 ++ '[' -z '' ']' +++ /sbin/consoletype ++ CONSOLETYPE=pty ++ '[' -f /etc/sysconfig/i18n -a -z '' -a -z '' ']' ++ . /etc/profile.d/lang.sh ++ unset LANGSH_SOURCED ++ '[' -z '' ']' ++ '[' -f /etc/sysconfig/init ']' ++ . /etc/sysconfig/init +++ BOOTUP=color +++ RES_COL=60 +++ MOVE_TO_COL='echo -en \033[60G' +++ SETCOLOR_SUCCESS='echo -en \033[0;32m' +++ SETCOLOR_FAILURE='echo -en \033[0;31m' +++ SETCOLOR_WARNING='echo -en \033[0;33m' +++ SETCOLOR_NORMAL='echo -en \033[0;39m' +++ PROMPT=yes +++ AUTOSWAP=no +++ ACTIVE_CONSOLES='/dev/tty[1-6]' +++ SINGLE=/sbin/sushell ++ '[' pty = serial ']' ++ __sed_discard_ignored_files='/\(~\|\.bak\|\.orig\|\.rpmnew\|\.rpmorig\|\.rpmsave\)$/d' + '[' -r /etc/sysconfig/vncservers ']' + . /etc/sysconfig/vncservers ++ VNCSERVERS='1:larry 2:moe 3:curly' ++ VNCSERVERARGS[1]='-geometry 800x600' ++ VNCSERVERARGS[2]='-geometry 640x480' ++ VNCSERVERARGS[3]='-geometry 640x480' + prog='VNC server' + RETVAL=0 + case "$1" in + start + '[' 0 '!=' 0 ']' + . /etc/sysconfig/network ++ NETWORKING=yes ++ HOSTNAME=vps.binaryvisionaries.com ++ DOMAINNAME=server.name ++ GATEWAYDEV=venet0 ++ NETWORKING_IPV6=yes ++ IPV6_DEFAULTDEV=venet0 + '[' yes = no ']' + '[' -x /usr/bin/vncserver ']' + '[' -x /usr/bin/Xvnc ']' + echo -n 'Starting VNC server: ' Starting VNC server: + RETVAL=0 + '[' '!' -d /tmp/.X11-unix ']' + for display in '${VNCSERVERS}' + SERVS=1 + echo -n '1:larry ' 1:larry + DISP=1 + USER=larry + VNCUSERARGS='-geometry 800x600' + runuser -l larry -c 'cd ~larry && [ -r .vnc/passwd ] && vncserver :1 -geometry 800x600' + RETVAL=1 + '[' 1 -eq 0 ']' + break + '[' -z 1 ']' + '[' 1 -eq 0 ']' + failure 'vncserver start' + local rc=1 + '[' color '!=' verbose -a -z '' ']' + echo_failure + '[' color = color ']' + echo -en '\033[60G' + echo -n '[' [+ '[' color = color ']' + echo -en '\033[0;31m' + echo -n FAILED FAILED+ '[' color = color ']' + echo -en '\033[0;39m' + echo -n ']' ]+ echo -ne '\r' + return 1 + '[' -x /usr/bin/plymouth ']' + /usr/bin/plymouth --details + return 1 + echo + '[' 1 -eq 98 ']' + return 1 + exit 1
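
    Nothing in that debug trace shows Xvnc even being launched: the init script runs runuser -l larry -c 'cd ~larry && [ -r .vnc/passwd ] && vncserver :1 ...', so if the user has never set a VNC password the [ -r .vnc/passwd ] test fails, the command returns 1, and the service is flagged FAILED without any further message. A sketch of the usual fix, repeated for each user listed in VNCSERVERS:

        su - larry -c vncpasswd      # creates /home/larry/.vnc/passwd
        service vncserver restart
        # if a display still fails, the per-user session log says why
        cat /home/larry/.vnc/*.log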

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #053 – Final Post in Series

    - by Pinal Dave
    It has been a fantastic journey to write memory lane series for an entire year. This series gave me the opportunity to go back and see what I have contributed to this blog throughout the last 7 years. This was indeed fantastic series as this provided me the opportunity to witness how technology has grown throughout the year and how I have progressed in my career while writing this blog post. This series was indeed fantastic experience readers as many joined during the last few years and were not sure what they have missed in recent years. Let us continue with the final episode of the Memory Lane Series. Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles I have selected a few of my most favorite articles and have listed them here with additional notes below it. Let me know which one of the following is your favorite article from memory lane. 2007 Get Current User – Get Logged In User Here is the straight script which list logged in SQL Server users. Disable All Triggers on a Database – Disable All Triggers on All Servers Question : How to disable all the triggers for a database? Additionally, how to disable all the triggers for all servers? For answer execute the script in the blog post. Importance of Master Database for SQL Server Startup I have received following questions many times. I will list all the questions here and answer them together. What is the purpose of Master database? Should our backup Master database? Which database is must have database for SQL Server for startup? Which are the default system database created when SQL Server 2005 is installed for the first time? What happens if Master database is corrupted? Answers to all of the questions are very much related. 2008 DECLARE Multiple Variables in One Statement SQL Server is a great product and it has many features which are very unique to SQL Server. Regarding feature of SQL Server where multiple variable can be declared in one statement, it is absolutely possible to do. 2009 How to Enable Index – How to Disable Index – Incorrect syntax near ‘ENABLE’ Many times I have seen that the index is disabled when there is a large update operation on the table. Bulk insert of very large file updates in any table using SSIS is usually preceded by disabling the index and followed by enabling the index. I have seen many developers running the following query to disable the index. 2010 List of all the Views from Database Many emails I received suggesting that they have hundreds of the view and now have no clue what is going on and how many of them have indexes and how many does not have an index. Some even asked me if there is any way they can get a list of the views with the property of Index along with it. Here is the quick script which does exactly the same. You can also include many other columns from the same view. Minimum Maximum Memory – Server Memory Options I was recently reading about SQL Server Memory Options over here. While reading this one line really caught my attention is minimum value allowed for maximum memory options. The default setting for min server memory is 0, and the default setting for max server memory is 2147483647. The minimum amount of memory you can specify for max server memory is 16 megabytes (MB). 2011 Fundamentals of Columnstore Index There are two kinds of storage in a database. Row Store and Column Store. 
Row store does exactly as the name suggests – stores rows of data on a page – and column store stores all the data in a column on the same page. These columns are much easier to search – instead of a query searching all the data in an entire row whether the data are relevant or not, column store queries need only to search a much lesser number of the columns. How to Ignore Columnstore Index Usage in Query In summary the question in simple words “How can we ignore using the column store index in selective queries?” Very interesting question – you can use I can understand there may be the cases when the column store index is not ideal and needs to be ignored the same. You can use the query hint IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX to ignore the column store index. The SQL Server Engine will use any other index which is best after ignoring the column store index. 2012 Storing Variable Values in Temporary Array or Temporary List SQL Server does not support arrays or a dynamic length storage mechanism like list. Absolutely there are some clever workarounds and few extra-ordinary solutions but everybody can;t come up with such solution. Additionally, sometime the requirements are very simple that doing extraordinary coding is not required. Here is the simple case. Move Database Files MDF and LDF to Another Location It is not common to keep the Database on the same location where OS is installed. Usually Database files are in SAN, Separate Disk Array or on SSDs. This is done usually for performance reason and manageability perspective. Now the challenges comes up when database which was installed at not preferred default location and needs to move to a different location. Here is the quick tutorial how you can do it. UNION ALL and ORDER BY – How to Order Table Separately While Using UNION ALL If your requirement is such that you want your top and bottom query of the UNION resultset independently sorted but in the same result set you can add an additional static column and order by that column. Let us re-create the same scenario. Copy Data from One Table to Another Table – SQL in Sixty Seconds #031 – Video http://www.youtube.com/watch?v=FVWIA-ACMNo Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Enable DreamScene in Any Version of Vista or Windows 7

    - by DigitalGeekery
    Windows DreamScene was a utility available for Vista Ultimate that allowed users to set video as desktop wallpaper. It was dropped in Windows 7, but we’ll take a look at how to play DreamScenes in all versions of Windows 7 or Vista. Downloading DreamScenes First, you’ll need to find some DreamScenes to download. We’ve found some nice ones at both DreamScene.org and DeviantArt. You can find those download links at the end of the article. They’ll come as compressed files, so you’ll need to extract them after downloading. Windows 7 DreamScene Activator If you are running Windows 7 you can use Windows 7 DreamScene Activator. This free portable utility enables DreamScene in both 32 & 64 bit versions of Windows 7. Users can then set either MPG or WMV files as desktop wallpaper. Download and extract the Windows 7 DreamScene Activator (link below). Once extracted, you’ll need to run the application as administrator. Right-click on the .exe and select Run as administrator. Click on Enable DreamScene. This will also restart Windows Explorer if it is open. To play your DreamScene, browse for the file in Windows Explorer, right-click the file and select Set as Desktop Background. Enjoy your new Windows 7 DreamScene.   Although it says it is for Windows 7 only, we were able to get it to work with no problems on Vista Home Premium x32 as well.   You can Pause the DreamScene at anytime by right-clicking on the desktop and selecting Pause DreamScene.   When you are ready for a change, click Disable DreamScene and switch back to your previous wallpaper. Using VLC Media Player Users of all versions of Windows 7 & Vista can enable a DreamScene using VLC. Recently, we showed you how to set a video as your desktop wallpaper in VLC.  Since DreamScenes are in MPEG or WMV format, we will use the same tactic to display them as desktop wallpaper. We’ll just need to make a few additional tweaks to the VLC settings. You’ll need to download and install VLC media player if you don’t already have it. You can find the download link below. Next, select Tools > Preferences from the Menu. Select the Video button on the left and then choose DirectX video output from the Output dropdown list. Next, select All under Show Settings at the lower left, then select the Video button on the left pane. Uncheck Show media title on video. This will prevent VLC from constantly showing the title of the video on the screen each time the video loops. Click Save and the restart VLC.   Now we will add the video to our playlist and set it to continuously loop. Select View > Playlist from the Menu. Select the Add file button from the bottom of the Playlist window and select Add file.   Browse for your file and click Open.   Click the Loop button at the bottom so the video plays in a continuous loop.   Now, we’re ready to play the video. After the video starts playing, select Video > DirectX Wallpaper from the Menu, then minimize VLC.   If you’re using Aero Themes, you may get a pop-up warning and Windows will switch automatically to a basic theme.   If looping one video gets to be a little repetitive, you can add multiple videos to your playlist in VLC and loop the entire playlist. Just make sure you toggle the Loop button on the playlist window to Loop All. Now you’ve got a nice DreamScene playing on your desktop. Another cool trick you can do with VLC is take snapshots of favorite movie scenes and set them as backgrounds. 
When you’re ready to go back to your old wallpaper, maximize VLC, select Video and click DirectX Wallpaper again to turn off the video background. Occasionally we were left with a black screen and had to manually change our wallpaper back to normal even after turning off the DirectX Wallpaper. Note: Keep in mind that using the VLC method takes up a lot of resources, so if you try to run it on older hardware, or say a netbook, you’re not going to get good results. We also tried to use the VLC method in XP, but couldn’t get it to work. If you have, leave a comment and let us know. While the DreamScene feature never really caught on in Vista, we find DreamScenes to be a cool way to pump a little life into your desktop on any version of Vista or Windows 7. Downloads: DreamScenes from Dreamscene.org, DreamScenes from DeviantArt, Download VLC media player, Windows 7 DreamScene Activator

    Read the article

  • Loop through several servers, find specific dlls , get the dll version, internal filename and path?

    - by Graham
    I am a newbie to Powershell, and using PS v2. I can see the massive potential it has, but I just can't get the following code to work fully. I am trying to end up with a csv file that contains the wild-carded required dlls in the GAC_MSIL or a sub-directory, get the dll version, internal filename and path, and the server IP address. The code is below, and it is in single-line format because it appears easier to remote onto one of the servers in the server farm and run the single line from that console, due to security log-ins etc. The code has produced a set of results, but only for the last server; it probably does the first server, then overwrites it, but I am not sure about that. I have done a lot of reading about using arrays and custom objects, and had a go at doing that, but my scripting skills in PS are not yet up to it. Code: $out = "Ouput_dll_ver_results.csv"; foreach ($server in '11.222.33.123', '11.222.33.124') {$VersionInfo = (Get-ChildItem \\$server\C$\windows\assembly\GAC_MSIL -recurse -Include abc*.dll,def*.dll,ghi*.dll,jkl*.dll | Where-Object { $_.FullName -notmatch "\\windows\\assembly\\temp\\" })}; $VersionInfo | %{Get-Command $_.FullName} | select -expand File* | Export-Csv $out Can you please advise if/how the above code can be corrected, and if not, what alternatives do I have to get the information I need. Many thanks in advance. Graham

    Read the article

  • Installing Ruby 1.9.3 OSX 10.7.4 breaks after altering PATH

    - by R V
    I was having trouble installing ruby 1.9.3-p194 from ruby 1.8.7 on my mac osx 10.7.4. I was trying to fix my homebrew after running "brew doctor" and got the message: "/usr/bin occurs before /usr/local/bin This means that system-provided programs will be used instead of those provided by Homebrew. The following tools exist at both paths: c++-4.2 cpp-4.2 erb g++-4.2 gcc-4.2 gcov-4.2 gem i686-apple-darwin11-cpp-4.2.1 i686-apple-darwin11-g++-4.2.1 i686-apple-darwin11-gcc-4.2.1 irb rake rdoc ri ruby testrb" I fixed it by entering the following, which I found in another stackoverflow answer: export PATH="/usr/local/bin:/usr/local/sbin:~/bin$PATH" Lo and behold! When I typed that, ruby updated to 1.9.3-p194. Ruby files seem to compile and run just fine. However, afterward, my navigation around the terminal is messed up severely. For instance I can't do the command "open example_file.html" and have the file pop up in Chrome; instead I get the error: "-bash: open: command not found" Also, when I change directory I get an error: inputting "$ cd desktop" yields the output "-bash: dirname: command not found", but the directory does then change... strange. When I exit out of a terminal window all this resets. I'm back to Ruby 1.8.7, have to use the PATH command again to update to 1.9.3, and command-line navigation gets broken again. Any guidance on how to remedy this so I can use 1.9.3-p194 and also have normal terminal navigation would be greatly appreciated.

    Read the article

  • nxclient crashes when trying to open a terminal from a remote client through "ssh -Y"

    - by user167328
    I support around 150 linux machines. I have 2 virtual machines on an ESXi server which I access via nxmachine v3 from a windows 7 box. These machines run CentOS5 with KDE and Lubuntu 12.04.1 and they are the admin GUIs from which I support the 150 machines. The linux machines which I manage are redhat4/5, CentOS5 and ubuntu 10 and 12. Normally I contact the machines via ssh -Y. Today I did an ssh -Y to a remote machine which is running Ubuntu 12.10 and ssh 6.0p1. Then I tried to open an lxterminal on the remote machine which should display on my KDE desktop. This immediately and reproducibly crashed my nxclient session. I tried again from my lubuntu system with the same effect. I have not observed the phenomenon from other machines yet. The message log on my KDE host shows: Unexpected termination of nxagent because of signal: 11 Logger::log nxnode 3920 Googling for this revealed no usable answer. Does anybody have a clue what is going on here, or can anyone give a hint on how to solve the issue? Add On: I asked the user at the remote machine to export his DISPLAY to my host and open an lxterminal. This worked without problems, i.e. the nxclient did not crash. Then the user tried to send me xeyes and this also killed the nxclient with the same error message found in the message log as above. This makes me suspect that the problem is not solely connected to ssh but maybe to some library stuff.

    Read the article

  • Saving a file in a CSV type in Excel always removes the BOM

    - by rickp
    I've been trying to find a reasonable solution/explanation (unsuccessfully) to find out why Excel defaults to removing the BOM when saving a file to the CSV type. Please forgive me if you find this a duplicate of this question. This handles reading CSV files with non-ASCII encoding, but it doesn't cover saving the file back out (which is where the biggest issue lies). Here is my current situation (which I'm going to gather is common among localized software dealing with Unicode characters and a CSV format): We export data to a CSV format using UTF-16LE, ensuring the BOM is set (0xFFFE). We validate after the file is generated with a Hex editor to ensure it was set correctly. Open the file in Excel (for this example we're exporting Japanese characters) and witness that Excel handles loading the file with the correct encoding. Attempts to save this file will prompt you with a warning message indicating that the file may contain features that may not be compatible with Unicode encoding, but asks if you'd like to save anyway. If you select the Save As dialog, it will immediately ask you to save the file as "Unicode Text" rather than CSV. If you select the "CSV" extension and save the file it removes the BOM (obviously along with all the Japanese characters). Why would this happen? Is there a solution to this problem, or is this a known 'bug'/limitation of Excel? Additionally (as a side issue) it appears that Excel, when loading UTF-16LE encoded CSV files, only uses TAB delimiters. Again, is this another known 'bug'/limitation of Excel?

    Read the article

  • Add and remove letterhead in Word document

    - by Daniel Wolf
    Our company has letterheaded paper (pre-printed paper with our logo on it). Whenever we send something out by mail, we print it on that paper. However, when we send the same document via email, we convert it to a PDF file. Now the problem is: when converting a Word document to PDF, it should contain the letterhead. When printing the same document on paper, it should not (or else the letterhead would be printed twice). Currently, we are using two different Word document templates - one with letterhead, one without. So whenever we want to add or remove the letterhead, we have to create a new document with the other template and copy and paste everything over. Nasty solution. What I'm looking for is some simple way to switch the letterhead on and off. What I've tried so far: Switching the template: There does not seem to be a simple way to switch the template for an existing document. Using a picture watermark: Our letterhead goes all the way to the border of the page. (No printer supports this, of course, but it is fine for export to PDF.) Apparently depending on the current default printer, Word will not allow a borderless watermark, instead shifting the image around. Using the page header: When editing the page header, I can insert pictures at arbitrary positions, which is great. However, I could not find a way (short of macros) to enable/disable just the pictures in the header. (The text should remain there.)

    Read the article

< Previous Page | 129 130 131 132 133 134 135 136 137 138 139 140  | Next Page >