Search Results

Search found 7183 results on 288 pages for 'export to excel'.

Page 149/288 | < Previous Page | 145 146 147 148 149 150 151 152 153 154 155 156  | Next Page >

  • RPi and Java Embedded GPIO: Writing Java code to blink LED

    - by hinkmond
    So, you've followed the previous steps to install Java Embedded on your Raspberry Pi, you went to Fry's and picked up some jumper wires, LEDs, and resistors, you hooked up the wires, LED, and resistor to the correct pins, and now you want to start programming in Java on your RPi? Yes? OK, then... Here we go.

    You can use the following source code to blink your first LED on your RPi using Java. In the code you can see that I'm not using any complicated GPIO libraries like wiringpi or pi4j, and I'm not doing any low-level pin manipulation like you can in C. And, I'm not using python (hell no!). This is Java programming, so we keep it simpler (and more readable) than those other programming languages. See: Write Java code to do this

    In the Java code, I'm opening up the RPi Debian Wheezy well-defined file handles to control the GPIO ports. First I'm resetting everything using the unexport/export file handles. (On the RPi, if you open the well-defined file handles and write certain ASCII text to them, you can drive your GPIO to perform certain operations. See this GPIO reference). Next, I write a "1" then "0" to the value file handle of the GPIO0 port (see the previous pinout diagram). That makes the LED blink. Then, I loop to infinity. Easy, huh?

        /*
         * Java Embedded Raspberry Pi GPIO app
         */
        package jerpigpio;

        import java.io.FileWriter;

        /**
         * @author hinkmond
         */
        public class JerpiGPIO {

            static final String GPIO_OUT = "out";
            static final String GPIO_ON = "1";
            static final String GPIO_OFF = "0";
            static final String GPIO_CH00 = "0";

            /**
             * @param args the command line arguments
             */
            public static void main(String[] args) {
                FileWriter commandFile;

                try {
                    /*** Init GPIO port for output ***/
                    // Open file handles to GPIO port unexport and export controls
                    FileWriter unexportFile = new FileWriter("/sys/class/gpio/unexport");
                    FileWriter exportFile = new FileWriter("/sys/class/gpio/export");

                    // Reset the port
                    unexportFile.write(GPIO_CH00);
                    unexportFile.flush();

                    // Set the port for use
                    exportFile.write(GPIO_CH00);
                    exportFile.flush();

                    // Open file handle to port input/output control
                    FileWriter directionFile =
                        new FileWriter("/sys/class/gpio/gpio" + GPIO_CH00 + "/direction");

                    // Set port for output
                    directionFile.write(GPIO_OUT);
                    directionFile.flush();

                    /*--- Send commands to GPIO port ---*/
                    // Open file handle to issue commands to GPIO port
                    commandFile = new FileWriter("/sys/class/gpio/gpio" + GPIO_CH00 + "/value");

                    // Loop forever
                    while (true) {
                        // Set GPIO port ON
                        commandFile.write(GPIO_ON);
                        commandFile.flush();

                        // Wait for a while
                        java.lang.Thread.sleep(200);

                        // Set GPIO port OFF
                        commandFile.write(GPIO_OFF);
                        commandFile.flush();

                        // Wait for a while
                        java.lang.Thread.sleep(200);
                    }
                } catch (Exception exception) {
                    exception.printStackTrace();
                }
            }
        }

    Hinkmond

    Read the article

  • Using Oracle Linux iSCSI targets with Oracle VM

    - by wim.coekaerts
    A few days ago I had written a blog entry on how to use Oracle Solaris 10 (in my case), ZFS and the iSCSI target feature in Oracle Solaris to create a set of devices exported to my Oracle VM server. Oracle Linux can do this as well and I wanted to make sure I also tried out how to do this on Oracle Linux, and here are the results.

    When you install Oracle Linux 5 update 5 (anything newer than update 3), it comes with an rpm called scsi-target-utils. To begin your quest, should you choose to accept it :) make sure this is installed.

        rpm -qa | grep scsi-target

    If it is not installed:

        up2date scsi-target-utils

    The target utils come with a tool, tgtadm, which is similar to iscsitadm on Oracle Solaris. There are 2 components again on the iSCSI server side: (1) create volumes - we will use lvm with lvcreate; (2) expose a target using tgtadm.

    My server has a simple setup. All the disks are part of a single volume group called vgroot. To export a 50Gb volume I just create a new volume:

        lvcreate -L 50G -nmytest1 vgroot

    This will show up as a new volume in /dev/mapper as /dev/mapper/vgroot-mytest1. Create as many as you want for your environment. Since I already have my blog entry about the 5 volumes, I am not going to repeat the whole thing. You can just go look at the previous blog entry.

    Now that we have created the volume, we need to use tgtadm to set it up. Make sure the service is running:

        /etc/init.d/tgtd start

    or

        service tgtd start

    (if you want to keep it running you can do chkconfig tgtd on to start it automatically at boot time)

    Next you need a targetname to set everything up. My recommendation would be to install iscsi-initiator-utils. This will create an iscsi id and put it in /etc/iscsi/initiatorname.iscsi. For convenience you can do:

        source /etc/iscsi/initiatorname.iscsi
        echo $InitiatorName

    and from here on use $InitiatorName instead of the long complex iqn.

    create your target:

        tgtadm --lld iscsi --op new --mode target --tid 1 -T $InitiatorName

    to show the status:

        tgtadm --lld iscsi --op show --mode target

    add the volume previously created:

        tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/mapper/vgroot-mytest1

    re-run status to see it's there:

        tgtadm --lld iscsi --op show --mode target

    and just like on Oracle Solaris you now have to export (bind) it:

        tgtadm --lld iscsi --op bind --mode target --tid 1 -I iqn.1986-03.com.sun:01:2a7526f0ffff

    If you want to export the lun to every iscsi initiator then replace the iqn with ALL. Of course you have to add the iqn of each iscsi initiator or client you want to connect. In the case of my 2 node Oracle VM server setup, both Oracle VM servers' initiator names would have to be added.

    use status again to see that it has this iqn under ACL:

        tgtadm --lld iscsi --op show --mode target

    You can drop the --lld iscsi if you want, or alias it. It just makes the command line more obvious as to what you are doing.

    Oracle VM side: Refer back to the previous blog entry for the detailed setup of my Oracle VM server volumes, but the exact same commands will be used there.

    discover:

        iscsiadm --mode discovery --type sendtargets --portal

    login:

        iscsiadm --mode node --targetname iscsi targetname --portal --login

    get devices:

        /etc/init.d/iscsi restart

    and voila you should be in business. have fun.

    Read the article

  • HOWTO Turn off SPARC T4 or Intel AES-NI crypto acceleration.

    - by darrenm
    Since we released hardware crypto acceleration for SPARC T4 and Intel AES-NI support we have had a common question come up: 'How do I test without the hardware crypto acceleration?'. Initially this came up just for development use so developers can do unit testing on a machine that has hardware offload but still cover the code paths for a machine that doesn't (our integration and release testing would run on all supported types of hardware anyway). I've also seen it asked in a customer context too, so that we can show that there is a performance gain from the hardware crypto acceleration (not just the fact that SPARC T4 is a much faster performing processor than T3) and measure what it is for their application.

    With SPARC T2/T3 we could easily disable the hardware crypto offload by running 'cryptoadm disable provider=n2cp/0'. We can't do that with SPARC T4 or with Intel AES-NI because in both of those classes of processor the encryption doesn't require a device driver; instead it is unprivileged user land callable instructions. Turns out there is a way to do this by using features of the Solaris runtime loader (ld.so.1).

    First I need to expose a little bit of implementation detail about how the Solaris Cryptographic Framework is implemented in Solaris 11. One of the new Solaris 11 features of the linker/loader is the ability to have a single ELF object that has multiple different implementations of the same functions that are selected at runtime based on the capabilities of the machine. The alternative to this is having the application coded to call getisax() and make the choice itself. We use this functionality of the linker/loader when we build the userland libraries for the Solaris Cryptographic Framework (specifically libmd.so, and the unfortunately misnamed, due to historical reasons, libsoftcrypto.so).

    The Solaris linker/loader allows control of a lot of its functionality via environment variables; we can use that to control the version of the cryptographic functions we run. To do this we simply export the LD_HWCAP environment variable with values that tell ld.so.1 to not select the HWCAP section matching certain features even if isainfo says they are present. For SPARC T4 that would be:

        export LD_HWCAP="-aes -des -md5 -sha256 -sha512 -mont -mpul"

    and for Intel systems with AES-NI support:

        export LD_HWCAP="-aes"

    This will work for consumers of the Solaris Cryptographic Framework that use the Solaris PKCS#11 libraries or use libmd.so interfaces directly. It also works for the Oracle DB and Java JCE. However, it does not work for the default enabled OpenSSL "t4" or "aes-ni" engines (unfortunately) because they do explicit calls to getisax() themselves rather than using multiple ELF cap sections.

    However we can still use OpenSSL to demonstrate this by explicitly selecting the "pkcs11" engine, using only a single process and thread.

        $ openssl speed -engine pkcs11 -evp aes-128-cbc
        ...
        type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
        aes-128-cbc     54170.81k   187416.00k   489725.70k   805445.63k  1018880.00k

        $ LD_HWCAP="-aes" openssl speed -engine pkcs11 -evp aes-128-cbc
        ...
        type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
        aes-128-cbc     29376.37k    58328.13k    79031.55k    86738.26k    89191.77k

    We can clearly see the difference this makes in the case where AES offload to the SPARC T4 was disabled. The "t4" engine is faster than the pkcs11 one because there is less overhead (again on a SPARC T4-1 using only a single process/thread - using -multi you will get even bigger numbers).

        $ openssl speed -evp aes-128-cbc
        ...
        type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
        aes-128-cbc     85526.61k    89298.84k    91970.30k    92662.78k    92842.67k

    Yet another cool feature of the Solaris linker/loader, thanks Rod and Ali.

    Note the above openssl speed output is not intended to show the actual performance of any particular benchmark, just that there is a significant improvement from using hardware acceleration on SPARC T4. For cryptographic performance benchmarks see the http://blogs.oracle.com/BestPerf/ postings.
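    Since the post notes that LD_HWCAP also affects the Java JCE, one way to see that effect for yourself is to time a small AES loop twice - once normally and once with LD_HWCAP="-aes" exported before launching the JVM. The class below is my own illustrative sketch, not something from the post; the class name, key size, and 5-second run length are arbitrary choices.

        import javax.crypto.Cipher;
        import javax.crypto.KeyGenerator;
        import javax.crypto.SecretKey;
        import javax.crypto.spec.IvParameterSpec;

        // Tiny AES-128-CBC throughput loop; compare the MB/s it prints with and
        // without the hardware capability masked out via LD_HWCAP.
        public class AesTiming {
            public static void main(String[] args) throws Exception {
                KeyGenerator kg = KeyGenerator.getInstance("AES");
                kg.init(128);
                SecretKey key = kg.generateKey();

                Cipher cipher = Cipher.getInstance("AES/CBC/NoPadding");
                cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(new byte[16]));

                byte[] buf = new byte[8192];   // one 8 KB chunk per update() call
                long bytes = 0;
                long start = System.nanoTime();
                while (System.nanoTime() - start < 5_000_000_000L) {   // run roughly 5 seconds
                    cipher.update(buf);
                    bytes += buf.length;
                }
                double secs = (System.nanoTime() - start) / 1e9;
                System.out.printf("AES-128-CBC: %.1f MB/s%n", bytes / secs / 1e6);
            }
        }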

    Read the article

  • RPi and Java Embedded GPIO: Big Data and Java Technology

    - by hinkmond
    Java Embedded and Big Data go hand-in-hand, especially as demonstrated by prototyping on a Raspberry Pi to show how well the Java Embedded platform can perform on a small embedded device, which then becomes the proof-of-concept for industrial controllers, medical equipment, networking gear or any type of sensor-connected device generating large amounts of data. The key is a fast and reliable way to access that data using Java technology.

    In the previous blog posts you've seen the integration of a static electricity sensor and the Raspberry Pi through the GPIO port, then accessing that data through Java Embedded code. It's important to point out how this works and why it works well with Java code.

    First, the version of Linux (Debian Wheezy/Raspbian) that is found on the RPi has a very convenient way to access the GPIO ports through the use of Linux OS managed file handles. This is key in avoiding terrible and complex coding using register manipulation in C code, or having to program in a less elegant and clumsy procedural scripting language such as python. Instead, using Java Embedded allows a fast way to access those GPIO ports through those same Linux file handles. Java already has a very easy-to-program way to access file handles with a high degree of performance that matches direct access of those file handles with the Linux OS.

    Using the Java API java.io.FileWriter lets us open the same file handles that the Linux OS has for accessing the GPIO ports. Then, by first resetting the ports using the unexport and export file handles, we can initialize them for easy use in a Java app.

        // Open file handles to GPIO port unexport and export controls
        FileWriter unexportFile = new FileWriter("/sys/class/gpio/unexport");
        FileWriter exportFile = new FileWriter("/sys/class/gpio/export");
        ...
        // Reset the port
        unexportFile.write(gpioChannel);
        unexportFile.flush();

        // Set the port for use
        exportFile.write(gpioChannel);
        exportFile.flush();

    Then, another set of file handles can be used by the Java app to control the direction of the GPIO port by writing either "in" or "out" to the direction file handle.

        // Open file handle to input/output direction control of port
        FileWriter directionFile =
            new FileWriter("/sys/class/gpio/gpio" + gpioChannel + "/direction");

        // Set port for input
        directionFile.write("in");   // Or, use "out" for output
        directionFile.flush();

    And, finally, a RandomAccessFile handle can be used with a high degree of performance on par with native C code (only milliseconds to read in data and write out data) with low overhead (unlike python) to manipulate the data going in and out on the GPIO port, while the object-oriented nature of Java programming allows for an easy way to construct complex analytic software around that data access functionality to the external world.

        RandomAccessFile[] raf = new RandomAccessFile[GpioChannels.length];
        ...
        // Reset file seek pointer to read latest value of GPIO port
        raf[channum].seek(0);
        raf[channum].read(inBytes);
        inLine = new String(inBytes);

    It's Big Data from sensors and industrial/medical/networking equipment meeting complex analytical software on a small constrained device (like a Linux/ARM RPi) where Java Embedded allows you to shine as an Embedded Device Software Designer.

    Hinkmond
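    Pulling the fragments above together, here is a minimal, self-contained polling sketch of my own (not code from the post); the channel number "0" and the 500 ms sleep are arbitrary illustrative choices.

        import java.io.FileWriter;
        import java.io.RandomAccessFile;

        // Export a GPIO channel as an input, then repeatedly re-read its value file
        // by seeking back to offset 0 before each read.
        public class GpioPoller {
            public static void main(String[] args) throws Exception {
                String channel = "0";

                // Reset and export the port, then set its direction to input
                FileWriter unexportFile = new FileWriter("/sys/class/gpio/unexport");
                unexportFile.write(channel);
                unexportFile.flush();

                FileWriter exportFile = new FileWriter("/sys/class/gpio/export");
                exportFile.write(channel);
                exportFile.flush();

                FileWriter directionFile =
                    new FileWriter("/sys/class/gpio/gpio" + channel + "/direction");
                directionFile.write("in");
                directionFile.flush();

                // Poll the value file handle
                RandomAccessFile raf =
                    new RandomAccessFile("/sys/class/gpio/gpio" + channel + "/value", "r");
                byte[] inBytes = new byte[1];
                while (true) {
                    raf.seek(0);
                    raf.read(inBytes);
                    System.out.println("GPIO" + channel + " = " + new String(inBytes));
                    Thread.sleep(500);
                }
            }
        }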

    Read the article

  • Set up and configure an MVC4 project for Cloud Service (web role) and SQL Azure

    - by MagnusKarlsson
    I aim at keeping this blog post updated and adding related posts to it. Since there are a lot of these out there, I link to others who have done kind of the same before me - kind of a blog-DRY pattern that I'm aiming for. I also keep all mistakes and misconceptions for others to see. As an example: if I hit a stacktrace I will google it if I don't directly figure out the reason for it. I will then probably take the most plausible result and try it out. If it fails because I misinterpreted the error I will not delete it from the log but keep it for future reference and for others to see. That way people who find this blog can see multiple solutions for indexed stacktraces and I can better remember how to do stuff. To avoid my errors I recommend you read through it all before going from start to finish.

    The steps:
    1. Setup project in VS2012. (msdn blog)
    2. Setup Azure Services (half of mpspartners.com blog)
    3. Setup connection strings and configuration files (msdn blog + notes)
    4. Export certificates.
    5. Create Azure package from VS2012 and deploy to staging (same steps as for production).
    6. Connection string error

    Set up the Visual Studio project:
    http://blogs.msdn.com/b/avkashchauhan/archive/2011/11/08/developing-asp-net-mvc4-based-windows-azure-web-role.aspx

    Then log in to Azure to set up the services. Stop following this guide at the "publish website" part since we'll be uploading a package.
    http://www.mpspartners.com/2012/09/ConfiguringandDeployinganMVC4applicationasaCloudServicewithAzureSQLandStorage/

    When set up (connection strings for debug and release and all), follow this guide to set up the configuration files:
    http://msdn.microsoft.com/en-us/library/windowsazure/hh369931.aspx

    Trying to package our application at this step will generate the following warning:

        3>MvcWebRole1(0,0): warning WAT170: The configuration setting 'Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString' is set up to use the local storage emulator for role 'MvcWebRole1' in configuration file 'ServiceConfiguration.Cloud.cscfg'. To access Windows Azure storage services, you must provide a valid Windows Azure storage connection string.

    Right click the web role under roles in solution manager and choose properties. Choose "Service configuration: Cloud". At "specify storage account credentials" we will copy/paste our account name and key from the Azure management platform.
    3.1

    4. Right click Remote Desktop Configuration, select certificate and export to file. We need to allow it in Portal manager.
    4.1

    5. Now right click the cloud project and select package.
    5.1 Showing dialogue box.
    5.2 Package success.

    Now copy the path to the packaged file and go to management portal again. Click your web role and choose staging (or production). Upload.
    5.3 Tick the box about the single instance if that's what you want or you don't know what it means. Otherwise the following will happen (see image 4.6).
    5.4 Dialogue box. When you have clicked the symbol for the accept button you will see the following screen with some green indicators down at the right corner. Click them if you want to see status.
    5.5 Information screen.
    5.6 "Failed to deploy application. The upload application has at least one role with only one instance. We recommend that you deploy at least two instances per role to ensure high availability in case one of the instances becomes unavailable." To fix, go to step 5.4.

    If you forgot to (or just didn't know you were supposed to) export your certificates, the following error will occur. Side note, the following thread suggests: To prevent, "Enable Remote Desktop for all roles" when right-clicking BIAB and choosing "Package". But in my case it was the not so present certificates. I found the solution here:
    http://social.msdn.microsoft.com/Forums/en-US/dotnetstocktradersampleapplication/thread/0e94c2b5-463f-4209-86b9-fc257e0678cd
    5.7
    5.8 Success!
    5.9 Nice URL n' all. (More on that at another blog post).

    6. If you try to login and get the following error: when this error occurs, many web sites suggest this is because you need:
    http://nuget.org/packages/Microsoft.AspNet.Providers.LocalDB
    Or: http://nuget.org/packages/Microsoft.AspNet.Providers
    But it can also be that you don't have the correct setup for converting connection strings between your web.config and your debug.web.config (or release.web.config, whichever you're using). Run as suggested in the "ordinary" project in your solution. Go to the management portal and click update.

    Read the article

  • Open Office crashes, recovers, crashes again

    - by Daniel R Hicks
    After completely reinstalling my laptop due to apparent registry corruption, I've encountered a problem with Open Office: I open a simple Calc spreadsheet, it comes up normally, but then after anywhere from 5 seconds to several minutes (without even touching the Calc window) OO crashes, then comes up through recovery. If I let it "recover" it will do so and bring the spreadsheet up again, only to repeat the crash scenario again. If I kept clicking "OK" it would apparently do this all day.

    I reinstalled OO once and the problem went away for a while, but it came back. I then attempted to "reset" my profile (i.e., rename the OO user directory in App Data), but OO crashed during the first startup after that, then resumed the original behavior.

    If I open the same file using Excel it complains of errors in the file, and "recovers" them, but the "error report" it generates contains no details. If I save the "recovered" file then OO Calc will open it, but the problem returns after saving again. Any ideas? (The system is Vista SP2, running OO 3.4.1)

    How to reproduce:
    1. Start Open Office Calc.
    2. Save workspace as "CrashTest.ods"
    3. From Task Manager kill Open Office (soffice.exe/bin -- one of each)
    4. Double click on the saved "CrashTest.ods" in Explorer.
    5. OO puts up a message that recovery will occur -- allow it.
    6. When the Calc window comes up, don't touch it -- just wait about 10 seconds.
    7. Calc window closes and OO puts up a message that recovery will occur -- from now on the sequence will repeat.

    I suspect this behavior is limited to a few (recent) versions of OO, and very possibly only Calc. Reported as Open Office Bug 1211094.

    Sigh!! As much as it irritates me, I'm having to switch over to Excel for several things I used to do with Calc. Excel has a miserable UI, but at least it stays up for longer than 10 seconds.

    Read the article

  • Microsoft Office 2003 applications crash on 'Save As' to a network mapped drive

    - by Archit Baweja
    Hey guys, so I'm not sure if it belongs on the ServerFault forums, so I figured I'd ask here first because it's a workstation/client side issue. I have a client where we have Windows Server 2003 set up, with Windows XP Professional set up on all the workstations. We've set up a 'domain' and all workstations log on to the domain (authenticated by the Windows Domain Controller), and in the logon script we map drives on to each workstation.

    Everything is working peachy except for one workstation, where when I open a file in Excel from a mapped drive, it opens fine, but when I go to hit Save As, the Save As dialog pops up and hangs. I cannot perform any other action in Excel. When I try to cancel the Save As dialog, Excel crashes. The mapped drive opens up fine in Windows Explorer.

    To further investigate this issue, I created a new blank text document on the network drive in Windows Explorer. I then opened it, hit Save As, and the Save As dialog opened up fine and it would let me save the document. I repeated the above steps for a Word document. However, this time the Save As dialog hung/froze again. So I'd imagine it's a Microsoft Office issue. Any ideas?

    Read the article

  • Office 2007 constantly crashes, logged as Event ID 1000

    - by Nori
    I have a user who, despite my best efforts, is having constant Office 2007 crashes. I've tried deleting their profile and setting it up again, repairing Office, uninstalling completely and then reinstalling, and swapping out memory sticks. One event log error I keep getting is the following (note all the Office errors are event id 1000):

        Faulting application name: OUTLOOK.EXE, version: 12.0.6539.5000, time stamp: 0x4c12486d
        Faulting module name: EMSMDB32.DLL, version: 12.0.6539.5000, time stamp: 0x4c1246f8
        Exception code: 0xc0000005
        Fault offset: 0x0005d8e2
        Faulting process id: 0xf6c
        Faulting application start time: 0x01cb6633f33384f3
        Faulting application path: C:\Program Files (x86)\Microsoft Office\Office12\OUTLOOK.EXE
        Faulting module path: c:\progra~2\micros~1\office12\EMSMDB32.DLL
        Report Id: 0d4a2eab-d231-11df-80a0-4061868f5d10

    I also get this:

        Faulting application name: OUTLOOK.EXE, version: 12.0.6539.5000, time stamp: 0x4c12486d
        Faulting module name: olmapi32.dll, version: 12.0.6538.5000, time stamp: 0x4bfc6ad9
        Exception code: 0xc0000005
        Fault offset: 0x002357a9
        Faulting process id: 0x5e4
        Faulting application start time: 0x01cb661f4546aa77
        Faulting application path: C:\Program Files (x86)\Microsoft Office\Office12\OUTLOOK.EXE
        Faulting module path: c:\progra~2\micros~1\office12\olmapi32.dll
        Report Id: a4a90658-d224-11df-80a0-4061868f5d10

    The Excel error is this:

        Faulting application name: EXCEL.EXE, version: 12.0.6535.5002, time stamp: 0x4bd2a7f1
        Faulting module name: KERNELBASE.dll, version: 6.1.7600.16385, time stamp: 0x4a5bdbdf
        Exception code: 0xe06d7363
        Fault offset: 0x0000b727
        Faulting process id: 0x14a8
        Faulting application start time: 0x01cb61ab7bc0abab
        Faulting application path: C:\Program Files (x86)\Microsoft Office\Office12\EXCEL.EXE
        Faulting module path: C:\Windows\syswow64\KERNELBASE.dll
        Report Id: ba0c454b-cd9e-11df-80a0-4061868f5d10

    I have also gotten this for PowerPoint:

        Faulting application name: POWERPNT.EXE, version: 12.0.6500.5000, time stamp: 0x49a68f9d
        Faulting module name: COMShim.dll, version: 2010.3.325.110, time stamp: 0x4c51e0b1
        Exception code: 0x40000015
        Fault offset: 0x0001e388
        Faulting process id: 0x1480
        Faulting application start time: 0x01cb5fe9a0660e81
        Faulting application path: C:\Program Files (x86)\Microsoft Office\Office12\POWERPNT.EXE
        Faulting module path: C:\Program Files (x86)\FactSet\COMShim.dll
        Report Id: e03d2a21-cbdc-11df-9bc8-4061868f5d10

    (Some of the above lines edited to keep you from scrolling horizontally.)

    Lastly, I get this error several times a day. I don't think it is related, but maybe it is:

        Failed extract of third-party root list from auto update cab at: http://www.download.windowsupdate.com/msdownload/update/v3/static/trustedr/en/authrootstl.cab with error: A required certificate is not within its validity period when verifying against the current system clock or the timestamp in the signed file.

    Any ideas? This is driving me nuts.

    Read the article

  • Five stars of open data - example and review

    - by Joe
    (There may be a more suited SE site for this question, so feel free to shift it.) I have some data I'd like to make open to the public - it's a synthesis of some related data retrieved from freedom of information requests over the last year. The data itself is at http://www.cs.rhul.ac.uk/home/joseph/domesday/Domesday-Scotland.csv or, for fans of Excel, at http://www.cs.rhul.ac.uk/home/joseph/domesday/Domesday-Scotland.xlsx . It's no more than a table with about five columns.

    I'd like to make this properly open data, so I was looking at the 5 star deployment scheme for Open Data. Much of it is fine but I'm confused towards the end and I could do with an explanation from people who know the answers. So to achieve the star levels I need to:

    1. "make your stuff available on the Web (whatever format) under an open license" - trivial - all I have to do is put the notes up on the page that will give the provenance of the data.
    2. "make it available as structured data (e.g., Excel instead of image scan of a table)" - done.
    3. "use non-proprietary formats (e.g., CSV instead of Excel)" - done.
    4. "use URIs to identify things, so that people can point at your stuff" - this is where I start to get a bit hazy - does this mean there should be a URI for every line in the table?
    5. "link your data to other data to provide context" - this isn't massively clear to me - does this mean to give the provenance of the data? One column of the data I've put out is a link to where the data came from - is that the sort of thing we're looking at?

    Any and all information and answers welcome.

    EDIT - or if anyone wants to recommend a place, SE or other, to ask the question - that would be cool...

    Read the article

  • Drupal 7: One-time user account

    - by Noob
    I'm going to create a survey in Drupal 7 with the webform module, installed on a Debian system which may be adapted in every way. The users (personally known, approx. 120) doing that survey will walk into a room and complete the survey in browsers on different computers. After that, they'll leave the room and other persons will enter, complete the survey on the same computers and so on. Each user may enter only one submission. The process needs to be anonymous, i.e. I mustn't have any idea of who did which submission.

    My current solution is to generate random one-time passwords and hand out one password per user (without noting who got which password). Within the survey there will be a password field where the one-time password is entered. The value is checked by webform to be unique. I'll get the data via CSV or Excel and verify the passwords manually in Excel by comparing them to the list of valid passwords.

    The problem is: I don't like the idea of manually generating the password list, copying it to Excel and doing a manual check. That's a good idea for one-time use, but we're going to repeat the survey every once in a while. I'd rather generate one-time logins (like user0001/fdlkjewf, user0002/dfrefnnr, ...) for each survey, hand them out to the users and let Drupal/Debian/whatever check whether a submission is valid or not.

    Do you have any idea how to batch-generate about 120 users with one-time passwords in Drupal 7 and verify that each user may submit the form only once? Do you even have a better idea how to accomplish the task within the intranet? Thank you for your help.
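    (Not a Drupal answer, but for the credential-generation half of the problem, a rough sketch of batch-generating 120 one-time logins as CSV is shown below. It is my own illustration, the username pattern and password alphabet are arbitrary, and the output would still need to be imported into Drupal by whatever route you choose.)

        import java.security.SecureRandom;

        // Print 120 throwaway logins (user0001,randompassword) as CSV lines.
        public class OneTimeLogins {
            private static final String ALPHABET = "abcdefghjkmnpqrstuvwxyz23456789";

            public static void main(String[] args) {
                SecureRandom rnd = new SecureRandom();
                for (int i = 1; i <= 120; i++) {
                    StringBuilder pw = new StringBuilder();
                    for (int j = 0; j < 8; j++) {
                        pw.append(ALPHABET.charAt(rnd.nextInt(ALPHABET.length())));
                    }
                    System.out.printf("user%04d,%s%n", i, pw);
                }
            }
        }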

    Read the article

  • Find and Replace String in filenames

    - by shekhar
    I have thousands of files with no specific extensions. What I need to do is to search for a string in the filename and replace it with another string, then search for a second string and replace it with some other string, and so on. I.e. I have multiple strings to replace with other multiple strings. It may be like:

    - "abc" in filename replaced with "def" (the string "abc" may be in many files)
    - "jkl" in filename replaced with "srt" (the string "jkl" may be in many files)
    - "pqr" in filename replaced with "xyz" (the string "pqr" may be in many files)

    I am currently using an Excel macro to get the file names into Excel, keeping the original names in one column and the desired new names in another column. Then I create a batch file for the renames, like:

        rename Path\OriginalName1 NewName1
        rename Path\OriginalName2 NewName2

    The problem with the above procedure is that it takes a lot of time as there are many files, and as I am using Excel 2003 there is a limitation on the number of rows as well. I need a script in batch like:

        replacestr abc with def
        replacestr pqr with xyz

    in a single directory. Would it be better to do this in a Unix script?
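    (Not the batch or Unix answer being asked for, but to make the intended logic concrete, here is a rough Java sketch of the bulk rename; the directory path is a placeholder and the replacement pairs are the ones from the question.)

        import java.io.File;
        import java.util.LinkedHashMap;
        import java.util.Map;

        // Rename every file in one directory, applying each substring replacement in turn.
        public class BulkRename {
            public static void main(String[] args) {
                Map<String, String> replacements = new LinkedHashMap<>();
                replacements.put("abc", "def");
                replacements.put("jkl", "srt");
                replacements.put("pqr", "xyz");

                File dir = new File("C:\\path\\to\\files");   // placeholder directory
                File[] files = dir.listFiles();
                if (files == null) {
                    return;   // not a directory, or not readable
                }
                for (File f : files) {
                    String newName = f.getName();
                    for (Map.Entry<String, String> e : replacements.entrySet()) {
                        newName = newName.replace(e.getKey(), e.getValue());
                    }
                    if (!newName.equals(f.getName())) {
                        boolean ok = f.renameTo(new File(dir, newName));   // false on failure
                        System.out.println((ok ? "renamed " : "failed  ") + f.getName() + " -> " + newName);
                    }
                }
            }
        }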

    Read the article

  • Small business server 2011 standard - applications randomly closing for remote desktop users

    - by Ash King
    I have an issue where, when you are connected through Remote Desktop (it doesn't matter whether you have administrative rights or not), any application that you run (Outlook, Word, Excel, Notepad, cmd, etc.) will randomly crash and produce an error such as:

        Faulting application name: EXCEL.EXE, version: 14.0.6112.5000, time stamp: 0x4e9b2b30
        Faulting module name: ieframe.dll, version: 8.0.7600.16930, time stamp: 0x4eeb0187
        Exception code: 0xc0000005
        Fault offset: 0x0000000000131e03
        Faulting process id: 0x3d4c
        Faulting application start time: 0x01cecf3491388e43
        Faulting application path: C:\Program Files\Microsoft Office\Office14\EXCEL.EXE
        Faulting module path: C:\Windows\System32\ieframe.dll
        Report Id: 1c06abd4-3b2b-11e3-bd8d-001999b270e9

    I noticed the ieframe.dll, but it's not constant for every application that crashes, e.g.:

        Faulting application name: OUTLOOK.EXE, version: 14.0.6109.5005, time stamp: 0x4e79b6c0
        Faulting module name: PSTOREC.DLL_unloaded, version: 0.0.0.0, time stamp: 0x4a5be02a
        Exception code: 0xc0000005
        Fault offset: 0x000007fef39c7158
        Faulting process id: 0x43f8
        Faulting application start time: 0x01cecf33fe5eec26
        Faulting application path: C:\Program Files\Microsoft Office\Office14\OUTLOOK.EXE
        Faulting module path: PSTOREC.DLL
        Report Id: 0c0f5934-3b2b-11e3-bd8d-001999b270e9

    I am unable to perform an sfc /scannow command due to cmd.exe crashing as well. I have performed a virus scan on the server which did originally pick up 5 viruses:

        riskware.tool.ck -> File
        riskware.tool.ck -> Memory Process
        trojan.agent.bdavgen -> File
        trojan.agent -> File
        HiJack.comsysapp -> Registry Data

    But after removing these and rebooting the machine we have had no luck. Has anyone else ever come across this issue before? Also, to elaborate, it is happening as frequently as every minute.

    Read the article

  • Publishing toolchain

    - by Marcelo de Moraes Serpa
    Hello all, I have a book project which I'd like to start sooner rather than later. This would follow an agile-like publishing workflow, i.e.: publish early and often. It is meant to be self-published by me and I'm not really looking to paper-publish it, even though we never know.

    If I weren't a geek, I'd probably have already started writing in Word or any other WYSIWYG tool and just exported to PDF. However, we know it is not the best solution, and emacs rules my text-editing life, so the output format should be as simple as possible and be text-based. I've thought about the following options:

    1. Use org-mode and export to PDF;
    2. Use markdown mode and export to PDF;
    3. Use something similar to what the guys @ Pragmatic Programmers do: XML + XSLT + LaTeX. More complex, but much more control over the style.

    Any other ideas / references? I want to start writing as soon as possible. In fact, I already have a draft in an org-formatted file. However, I do want to have and use the full power of LaTeX later on to format it the way I want and make it look fabulous :)

    Thanks in advance, Marcelo.

    Read the article

  • relational data from xml

    - by Beta033
    Our problem is this: we have a relational database to store objects in tables. As with any relational database, several of these tables have multiple foreign keys pointing to other tables (all pretty normal stuff).

    We've been trying to identify a solution to allow export of this relational data - ideally only 1 of the objects in the model - to some sort of file (xml, text, ??). So it wouldn't be simple enough to just export 1 table, as data stored in other tables would contribute to the complete model of the object. Something like the following picture:

    Toward this, I've written a routine to export the structure by following the foreign key paths, which exports something similar to the following:

        <Tables>
            <TableA PK="1", val1, val2, val3>
            <TableC PK="1", FK_A="1", Val1, val2, val3>
            <TableC PK="2", FK_A="1", val1, val2, val3>
            <TableB PK="1", FK_A="1", FK_C="1", val1, val2, val3>
            <TableB PK="2", FK_A="1", FK_C="2", val1, val2, val3>
            <TableD PK="1", FK_B="1", FK_C="1", val1>
            <TableD PK="2", FK_B="2", FK_C="1", val1>
        </Tables>

    However, given this structure, it cannot be placed into a hierarchical format (i.e. D2 is a child of C1 and B2; and B2 is a child of C2), which in turn makes my life very difficult when trying to identify a methodology to reimport (and re-key) these objects.

    Has anybody done anything like this? How do you do it? Are there tools or documentation on how this is best accomplished? Thanks for your help.
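    (The post doesn't include the export routine itself; purely as an illustration of "following the foreign key paths", here is a hedged JDBC sketch of mine that walks DatabaseMetaData.getImportedKeys() to collect every table a single object's export would have to touch. The connection URL and starting table are placeholders.)

        import java.sql.Connection;
        import java.sql.DatabaseMetaData;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.util.LinkedHashSet;
        import java.util.Set;

        // Depth-first walk of the foreign keys leading out of a starting table,
        // collecting the set of parent tables an export would need to include.
        public class FkWalker {
            public static void main(String[] args) throws Exception {
                Connection con = DriverManager.getConnection("jdbc:...", "user", "pass");   // placeholders
                Set<String> visited = new LinkedHashSet<>();
                walk(con.getMetaData(), "TABLEB", visited);
                System.out.println("Tables needed for export: " + visited);
            }

            static void walk(DatabaseMetaData md, String table, Set<String> visited) throws Exception {
                if (!visited.add(table)) {
                    return;   // already visited; also guards against FK cycles
                }
                try (ResultSet rs = md.getImportedKeys(null, null, table)) {
                    while (rs.next()) {
                        walk(md, rs.getString("PKTABLE_NAME"), visited);   // parent table
                    }
                }
            }
        }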

    Read the article

  • Submenu within menu within menu?

    - by abhishek mishra
    On pressing the menu button, I have 2 options: Add & More. On click of More I have 3 options: Organize, Export & Exit. On click of Organize I want another 5 options. On click of More I get my submenu, but I want the other 5 options on click of Organize. How do I proceed?

    My code, in parts, is as follows:

    XML file:

        <item android:id="@+id/more"
              android:title="@string/moreMenu"
              android:icon="@drawable/icon">
            <menu>
                <item android:id="@+id/Organize"
                      android:title="@string/Organize" />
                <item android:id="@+id/Export"
                      android:title="@string/Export" />
            </menu>
        </item>
        <item android:id="@+id/add"
              android:title="@string/addMenu"
              android:icon="@drawable/add" />

    Java:

        package com.tcs.QuickNotes;

        import android.app.Activity;
        import android.os.Bundle;
        import android.view.Menu;
        import android.view.MenuItem;
        import android.widget.Toast;

        public class ToDoList extends Activity {
            Menu menu;

            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.todolist);
            }

            public boolean onCreateOptionsMenu(Menu menu) {
                super.onCreateOptionsMenu(menu);
                getMenuInflater().inflate(R.layout.categorymenu, menu);
                return true;
            }

            public boolean onOptionsItemSelected(MenuItem item) {
                switch (item.getItemId()) {
                    case R.id.more:
                        Toast.makeText(this, "You pressed more!", Toast.LENGTH_LONG).show();
                        // (What needs to be done from here)
                        return true;
                    case R.id.add:
                        Toast.makeText(this, "You pressed add!", Toast.LENGTH_LONG).show();
                        return true;
                }
                return false;
            }

            public boolean onPrepareOptionsMenu(Menu menu) {
                return true;
            }
        }
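    (The post stops at the second level; Android's options menu only nests one level of submenu, so one common workaround - an assumption on my part, not something the poster settled on - is to pop a plain dialog when Organize is tapped. A sketch of that extra case, meant to drop into the existing onOptionsItemSelected() above; the option labels are placeholders.)

        // Inside onOptionsItemSelected(), alongside the existing cases:
        case R.id.Organize:
            final CharSequence[] organizeOptions =
                    { "Option 1", "Option 2", "Option 3", "Option 4", "Option 5" };
            new android.app.AlertDialog.Builder(this)
                    .setTitle("Organize")
                    .setItems(organizeOptions, new android.content.DialogInterface.OnClickListener() {
                        public void onClick(android.content.DialogInterface dialog, int which) {
                            Toast.makeText(ToDoList.this, "You picked " + organizeOptions[which],
                                    Toast.LENGTH_SHORT).show();
                        }
                    })
                    .show();
            return true;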

    Read the article

  • oracle sequence init

    - by gospodin
    I wanted to export 3 tables from db1 into db2. Before the export starts, I will create the sequences for those 3 tables:

        CREATE SEQUENCE TEST_SEQ START WITH 1 INCREMENT BY 1;

    After the export, I will reinitialize the sequence values to match the max(id) + 1 from the table:

        CREATE OR REPLACE PROCEDURE "TEST_SEQUENCE" AUTHID CURRENT_USER is
          v_num number;
        begin
          select max(ID) into v_num from TABLE_1;
          EXECUTE IMMEDIATE 'ALTER SEQUENCE TEST_SEQ INCREMENT BY ' || v_num;
          EXECUTE IMMEDIATE 'ALTER SEQUENCE 1TEST_SEQ INCREMENT BY 1';
        end;
        /
        show errors;

        execute TEST_SEQ;

    This procedure compiles and executes without problems. But when I want to check the last value of the sequence, like

        select TEST_SEQ.nextval from dual;

    I still get "1". Can someone tell me why my procedure did not impact my sequence?

    PS. I am using Oracle SQL Developer to pass the SQL. Thanks.

    Read the article

  • Importing into an Exported object with MEF

    - by Nathan W
    I'm sorry if this question has already been asked 100 times, but I'm really struggling to get it to work. Say I have three projects:

    - Core.dll - has common interfaces
    - Shell.exe - loads all modules in the assembly folder; references Core.dll
    - ModuleA.dll - exports Name, Version of module; references Core.dll

    Shell.exe has an [Export] that contains a single instance of a third party application that I need to inject into all loaded modules. So far the code that I have in Shell.exe is:

        static void Main(string[] args)
        {
            ThirdPartyApp map = new ThirdPartyApp();

            var ad = new AssemblyCatalog(Assembly.GetExecutingAssembly());
            var dircatalog = new DirectoryCatalog(".");
            var a = new AggregateCatalog(dircatalog, ad);

            // Not too sure what to do here.
        }

        class Test
        {
            [Export(typeof(ThirdPartyApp))]
            public ThirdPartyApp Instance { get; set; }

            [Import(typeof(IModule))]
            public IModule Module { get; set; }
        }

    I need to create an instance of Test and load Instance with map from the Main method, then load the Module from ModuleA.dll that is in the executing directory, then [Import] Instance into the loaded module. In ModuleA I have a class like this:

        [Export(IModule)]
        class Module : IModule
        {
            [Import(ThirdPartyApp)]
            public ThirdPartyApp Instance { get; set; }
        }

    I know I'm half way there, I just don't know how to put it all together, mainly with loading up Test with an instance of map from Main. Could anyone help me with this?

    Read the article

  • How do I append Word templates to a new document in VB.NET?

    - by Tom
    I'm poking around to see if this app can be done. Basically the end user needs to create a bunch of export documents that are populated from a database. There will be numerous document templates (.dot) and the end result will be the user choosing templates x, y and z to include for documentation, clicking a button and having the app create a new Word document, append the templates, and then populate the templates with the appropriate data. The reason it needs to be done in Word as opposed to something like Crystal Reports is that the user may customize some fields before printing the documents, as it can vary from export to export.

    Is this possible to do through VB.NET (VS 2010)? I assume it is but I'm having difficulty tracking down a solution. Or alternatively is there a better solution? Here's what I have so far (not much I know):

        Imports Microsoft.Office.Interop

        Public Class Form1
            Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
                Dim oWord As Word.Application
                Dim oDoc As Word.Document

                oWord = CreateObject("Word.Application")
                oWord.Visible = False
                oDoc = oWord.Documents.Add

                'Open templates x.dot, y.dot, z.dot
                'Append above templates to new document created
                'Populate new document

                oWord.Visible = True
            End Sub
        End Class

    Thanks.

    Read the article

  • Getting batch_action functionality in Symfony 1.0

    - by Alex Ciminian
    I'm currently working on a web app written in Symfony. I'm supposed to add an "export to CSV" feature in the backend/administration part of the app for some modules. In the list view, there should be an "Export" button which should provide the user with a CSV file of the elements that are displayed (taking filtering criteria into account).

    I've created a method in the actions class of the module that takes a comma-separated list of ids and generates the CSV, but I'm not really sure how to add the link to it in the view. The problem is that the view doesn't exist anywhere; it's generated on the fly from the data in the generator.yml configuration file. I've posted the relevant part of the file below. I'm new to Symfony, so any help would be appreciated :).

    Thanks, Alex

    Update:

        list:
          display: [id, =name, indemn, _status, _participants, _approved_, created_at]
          title: Lista actiuni
          object_actions:
            _edit: ~
            _delete: ~
          actions:
            _create: ~
            export_csv:
              name: Export to CSV
              action: CSVExport
              params: id=csvActionSubmit
          filters: [name, county_id, _status_filter, activity_id]
          fields:
            id:
              name: Nr. crt.
          ...

    Thanks to your advice, I've managed to add a button that is linked to my action. The problem is that I also need to send some parameters to the action, because I may not want all the elements - filters may have been used. Unfortunately, the project is using Symfony 1.0, which doesn't support batch_actions. Currently, I'm working around this with Javascript (I parse the DOM to get the numeric ids from the display table and then build the link for the button). I really think there could be a better way to do this.

    Read the article

  • Silverlight Business Application template with WCF is throwing a warning.

    - by Manoj
    Hi, I am using the Silverlight Business Application template. I wrote a function which uses the Membership.getUserList function to return the user list. I tried exposing it as a service using WCF. But when I try to compile the client side code it throws a warning saying "Client proxy generation for user_authentication.Web.Service1 failed". Why does this happen? The complete warning message is:

        Warning 4 Client proxy generation for service 'user_authentication.Web.Service1' failed:
        Generating metadata files...
        Warning: Unable to load a service with configName 'user_authentication.Web.Service1'. To export a service provide both the assembly containing the service type and an executable with configuration for this service.
        Details: Either none of the assemblies passed were executables with configuration files or none of the configuration files contained services with the config name 'user_authentication.Web.Service1'.
        Warning: No metadata files were generated. No service contracts were exported. To export a service, use the /serviceName option. To export data contracts, specify the /dataContractOnly option. This can sometimes occur in certain security contexts, such as when the assembly is loaded over a UNC network file share. If this is the case, try copying the assembly into a trusted environment and running it.

    Read the article

  • Exporting emails from Outlook programmatically with VBA

    - by David
    I'm using this script to export email from outlook. My question is how do I export the body of the email without the html formatting?

        Sub SaveItemsToExcel()
            On Error GoTo ErrorHandlerExit

            Dim oNameSpace As Outlook.NameSpace
            Dim oFolder As Outlook.MAPIFolder
            Dim objFS As Scripting.FileSystemObject
            Dim objOutputFile As Scripting.TextStream

            Set objFS = New Scripting.FileSystemObject
            Set objOutputFile = objFS.OpenTextFile("C:\Temp\Export.csv", ForWriting, True)
            Set oNameSpace = Application.GetNamespace("MAPI")
            Set oFolder = oNameSpace.PickFolder

            If oFolder Is Nothing Then
                GoTo ErrorHandlerExit
            End If

            If oFolder.DefaultItemType <> olMailItem Then
                MsgBox "Folder does not contain mail messages"
                GoTo ErrorHandlerExit
            End If

            objOutputFile.WriteLine "From,Subject,Recived, Body"

            ProcessFolderItems oFolder, objOutputFile

            objOutputFile.Close

            Set oFolder = Nothing
            Set oNameSpace = Nothing
            Set objOutputFile = Nothing
            Set objFS = Nothing

        ErrorHandlerExit:
            Exit Sub
        End Sub

        Sub ProcessFolderItems(oParentFolder As Outlook.MAPIFolder, ByRef objOutputFile As Scripting.TextStream)
            Dim oCount As Integer
            Dim oMail As Outlook.MailItem
            Dim oFolder As Outlook.MAPIFolder

            oCount = oParentFolder.Items.Count

            For Each oMail In oParentFolder.Items
                If oMail.Class = olMail Then
                    objOutputFile.WriteLine oMail.SenderEmailAddress & "," & Replace(oMail.Subject, ",", "") & "," & oMail.ReceivedTime
                End If
            Next oMail

            Set oMail = Nothing

            If (oParentFolder.Folders.Count > 0) Then
                For Each oFolder In oParentFolder.Folders
                    ProcessFolderItems oFolder, objOutputFile
                Next
            End If
        End Sub

    Read the article

  • CompositionHost.Initialize() cannot execute twice.

    - by Khoa
    I am currently trying to integrate MEF and PRISM to work with each other. Everything is working fine so far. Now I would like to use MEF runtime module discovery (DeploymentCatalog), which will be used to download XAPs from a server directory and then plug them into one of the regions inside my MAIN UI. I am using the UnityBootStrapper class and inside this class I also integrated the MEF container. This sample application is based on Glenn Block's post (http://codebetter.com/glennblock/2010/01/03/mef-and-prism-exploration-mef-module-loading/).

    The following code is used to initialize the CompositionContainer inside my Bootstrapper:

        // This is the host catalog which contains all parts of the running assembly.
        var catalog = GetHostCatalog();

        // Create MEF container with the initial catalog
        var container = new CompositionContainer(catalog);

        // Here we explicitly map a part to make it available on imports elsewhere, using
        // Unity to resolve the export so dependencies are resolved.
        // We do this because the region manager is third-party ... therefore, we need to
        // export explicitly because the implementation doesn't have its own [Export] tag
        container.ComposeExportedValue<IRegionManager>(Container.Resolve<IRegionManager>());
        container.ComposeExportedValue<IEventAggregator>(Container.Resolve<IEventAggregator>());

        // Obtain CatalogService as a singleton
        // All dynamic modules will use this service to add their parts.
        Container.RegisterInstance<ICatalogService>(new CatalogService(catalog));

        // Initialize the container
        CompositionHost.Initialize(container);

    Now I have another class called DeploymentCatalogService which is used to download the XAP from the server. The current problem I am facing is that inside DeploymentCatalogService's Initialize method, the CompositionHost container is initialized again with a new AggregateCatalog:

        _aggregateCatalog = new AggregateCatalog();
        _aggregateCatalog.Catalogs.Add(new DeploymentCatalog());
        CompositionHost.Initialize(_aggregateCatalog);

    This causes an exception stating that the container has already been initialized. Is there a way to use the existing container and update it with the new AggregateCatalog? Hope this is not too confusing. Please be nice, I am still new to MEF. Cheers,

    Read the article

  • Can I have the gcc linker create a static library?

    - by Lucas Meijer
    I have a library consisting of some 300 C++ files. The program that consumes the library does not want to dynamically link to it. (For various reasons, but the best one is that some of the supported platforms do not support dynamic linking.)

    I then use g++ and ar to create a static library (.a). This file contains all symbols of all those files, including ones that the library doesn't want to export. I suspect linking the consuming program with this library takes an unnecessarily long time, as all the .o files inside the .a still need to have their references resolved, and the linker has more symbols to process.

    When creating a dynamic library (.dylib / .so) you can actually use a linker, which can resolve all intra-lib symbols and export only those that the library wants to export. The result however can only be "linked" into the consuming program at runtime.

    I would like to somehow get the benefits of dynamic linking, but use a static library. If my google searches are correct in thinking this is indeed not possible, I would love to understand why this is not possible, as it seems like something that many C and C++ programs could benefit from.

    Read the article

  • MEF instance management

    - by Lawrence A. Contreras
    I am working on an application that has multiple modules and sub-modules. Here's what I need to do: ModuleA has sub-modules SubModuleA, SubModuleB, SubModuleC. I want to export ModuleA, SubModuleA, SubModuleB, SubModuleC. ModuleA can have multiple instances. Whenever I import ModuleA inside the sub-modules, I want to get the correct instance, and also when I import SubModuleA, SubModuleB and SubModuleC inside other classes. How can I manage that scenario using MEF? Do I really need to use MEF?

    Updated: Here's a quick example:

        public class School
        {
            List<Classroom> classRooms { get; set; }
        }

        [Export]
        public class Classroom
        {
            List<Teacher> teachers { get; set; }
        }

        public class Teacher
        {
            List<Student> students { get; set; }
        }

        public class Student
        {
            [Import]
            Classroom classroom { get; set; }
        }

    As you can see, I want to export the Classroom class because I need to import it in the Student class; let's just say that I really need the Classroom class inside the Student class. I want to skip the part where we pass the classroom to the teacher and from the teacher we'll pass it to the student. But when I import Classroom, I want to have the correct instance, where that classroom contains the teacher of the student.

    Read the article
