Search Results

Search found 25534 results on 1022 pages for 'write powershell'.


  • Concurrent modification during backup: rsync vs dump vs tar vs ?

    - by pehrs
    I have a Linux log server where multiple applications write data. Data is written in bursts, and to a lot of different files. I need to make a backup of this mess, preferably preserving as much coherence between the file versions as possible and avoiding truncated files. The total amount of data on the server is about 100 GB. What I would really want (but can't do) is to shut down, back up the system cold and then start it up again. What kind of guarantees against concurrent modification do the various backup tools give? When do they "freeze" the file versions? I am looking at rsync, dump and tar at the moment, but I am open to other (open source) alternatives. Changing the applications or blocking writes during backups is sadly not an option. The system is not running LVM (yet), but I have considered it for when I rebuild the system, and then using snapshots.
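
    If LVM does get adopted, a snapshot-based backup is the usual way to get a coherent point-in-time copy. A minimal sketch, assuming the log data lives on a hypothetical logical volume /dev/vg0/logs with some unallocated space left in the volume group; the snapshot freezes a consistent view while the applications keep writing to the origin:

        #!/usr/bin/env bash
        # Sketch only: volume group "vg0", LV "logs" and the backup target are assumed names.
        set -eu

        # Copy-on-write snapshot; must be large enough to absorb writes made during the backup
        lvcreate --size 5G --snapshot --name logs-snap /dev/vg0/logs

        mkdir -p /mnt/logs-snap
        mount -o ro /dev/vg0/logs-snap /mnt/logs-snap

        # rsync reads a frozen view here, so no file can be truncated mid-copy
        rsync -a /mnt/logs-snap/ backuphost:/backups/logserver/

        umount /mnt/logs-snap
        lvremove -f /dev/vg0/logs-snap

    Without a snapshot, rsync, tar and dump all read each file at whatever moment they reach it, so files written during the run can still come out truncated or mutually inconsistent.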

    Read the article

  • Storage disk format to use to be utilised by all OS

    - by ClothSword
    Aiming to have a large internal HDD to store files, and several OSes on separate disks. They all need to read and write to the one storage drive: Ubuntu (13.04+), Mac OS X (10.8+), Windows (7+). What format should the drive be? I would like to avoid buying third-party software; here's what I have discerned so far: NTFS - can't be written to by Mac without buying third-party software? ext3 - Windows can't read it; third-party software is in development, and Mac has OSXFUSE. HFS+ - buy third-party software and/or faff around to get it working on Windows. exFAT - cross-platform, but encumbered by MS patents?

    Read the article

  • How to mount remote samba share from local host with multiple groups?

    - by Dragos
    I am using mount.cifs to mount a remote Samba share (both client and server are Ubuntu Server 8.04) like this: mount.cifs //sambaserver/samba /mountpath -o credentials=/path/.credentials,uid=someuser,gid=1000 $ cat .credentials username=user password=password I mounted the share as a local user with the username and password via mount.cifs, but the problem is that the user is part of multiple groups on the remote system, and with mount.cifs I can only specify one gid. Is there a way to specify all the gids that the remote user has? In other words, is there a way to: 1) mount the remote Samba share with multiple groups on the local system, and 2) browse the mount from 1) in the terminal, since I want to pass some files from Samba as arguments to local programs? Other solutions would be: nautilus sftp://, which runs through GVFS; but newer GNOME no longer writes ~/.gvfs to disk, so I can't browse it in a terminal. The last solution would be NFS, but that means I have to synchronize the uids and gids on the local system with the ones from the server.

    Read the article

  • Converting a DWG/DXF to CSV or Excel

    - by Menno Gouw
    I'm using ZWCAD and I need to get the coordinates of hundreds of blocks into an Excel sheet or .CSV file so I can import them into the GPS hardware. I know there are plenty of tools for AutoCAD; I could probably even write one myself, but as far as ZWCAD goes I seem to be out of options. However, ZWCAD saves to DWG too, and exports to all the other familiar CAD extensions. So I was wondering: if I just save the blocks I need to export to a separate file, might there be a tool/program to convert that directly into .CSV?

    Read the article

  • Designing generic render/graphics component in C++?

    - by s73v3r
    I'm trying to learn more about Component Entity systems. So I decided to write a Tetris clone. I'm using the "style" of component-entity system where the Entity is just a bag of Components, the Components are just data, a Node is a set of Components needed to accomplish something, and a System is a set of methods that operates on a Node. All of my components inherit from a basic IComponent interface. I'm trying to figure out how to design the Render/Graphics/Drawable Components. Originally, I was going to use SFML, and everything was going to be good. However, as this is an experimental system, I got the idea of being able to change out the render library at will. I thought that since the Rendering would be fairly componentized, this should be doable. However, I'm having problems figuring out how I would design a common interface for the different types of Render Components. Should I be using C++ template types? It seems that having the RenderComponent somehow return its own mesh/sprite/whatever to the RenderSystem would be the simplest, but it would be difficult to generalize. However, letting the RenderComponent just hold on to data about what it would render would make it hard to re-use this component for different renderable objects (background, falling piece, field of already fallen blocks, etc.). I realize this is fairly over-engineered for a regular Tetris clone, but I'm trying to learn about component entity systems and making interchangeable components. It's just that rendering seems to be the hardest to split out for me.

    Read the article

  • SOA Community Newsletter March 2012

    - by JuergenKress
    Dear SOA partner community member, Thank you for your excellent feedback on the brand new Patch Set 5! PS5 is a combined release of all Fusion Middleware components. We recommend you use this version for all your upcoming projects. We are very keen to hear your feedback on PS5; please send it across to us, especially if your first customers are in production. Edwin, Roland and Demed from product management published a joint paper, Start Small, Grow Fast. Please let us know if you are interested in writing a joint paper. Till we meet again! To read the newsletter please visit http://tinyurl.com/soanewsMarch2012 (OPN account required). To become a member of the SOA Partner Community please register at http://www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Technorati Tags: SOA Community newsletter,SOA Community,Oracle,OPN,Jürgen Kress,WebLogic 12c,SOA Implementation Assessment,BPM Implementation Assessment,SOA Certification,SOA Specialist,BPM Certification,BPM Specialist,SOA Suite for Healthcare Integration,SOA Community Forum,SOA Specialization

    Read the article

  • fdisk -l only displays boot partition

    - by Franklin
    I have a SAN, and it's able to read and write to the 50TB RAID just fine, but when I run fdisk -l it only lists the boot partition of the SAN server, and doesn't display anything about the other partitions on the RAID. I've also tried using parted -l with the same result. Now when I type mount it shows that the partitions are mounted just fine. I've never seen this happen. The box is running Openfiler 2.3 (I know it's old, we're in the process of upgrading all our old equipment). We have another SAN that's configured almost identically, and it's able to display the partition info with either of the two commands I mentioned above.
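
    A hedged diagnostic sketch for this situation (device names are assumptions): Openfiler typically manages its volumes with LVM, and fdisk -l only reports partition tables, not LVM logical volumes or other device-mapper targets, so checking what the kernel itself sees should narrow things down:

        # Block devices and partitions the kernel currently knows about
        cat /proc/partitions

        # Filesystem signatures and labels on each block device
        blkid

        # LVM / device-mapper volumes, if the 50 TB array was set up that way
        ls -l /dev/mapper
        lvs 2>/dev/null

    Comparing this output with the SAN that does show its partitions should make any difference in layout obvious.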

    Read the article

  • How do I enable deflate on Apache2 on Debian?

    - by acidzombie24
    I looked at the other answers and those solutions did not help. I am completely confused about how to do this. I know almost nothing about Apache or how to use it, and nearly all the articles say "write this" but don't tell me where. So far I have done this: # a2enmod deflate Module deflate already enabled Then I tried writing this in httpd.conf: AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css Then I wrote a longer version which involved deflate.c. After those didn't work I deleted them all and wrote the AddOutputFilterByType line in apache2/sites-enabled/000-default inside of I used this to test: http://www.whatsmyip.org/http_compression/ — every time it says no compression. I used another site but I could never get compression. Also, I restart the server every time I save a file, so what could be the problem and how do I enable this properly?
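
    For reference, a minimal sketch of the usual Debian layout, assuming an Apache 2.2-era setup (paths are assumptions; on Apache 2.4 the drop-in directory and a2enconf differ): the filter line has to live in a file Apache actually includes, and curl can confirm compression locally instead of relying on an external test site:

        # The module is already enabled here, but this is the canonical step
        a2enmod deflate

        # Drop the filter into a file that apache2.conf includes
        echo 'AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css application/javascript' \
            > /etc/apache2/conf.d/deflate_custom

        /etc/init.d/apache2 restart

        # Request compression explicitly and check for the Content-Encoding response header
        curl -s -D - -o /dev/null -H 'Accept-Encoding: gzip,deflate' http://localhost/ | grep -i content-encoding

    If the header still does not appear, the response may be coming through a proxy, or the content type being tested may not be in the AddOutputFilterByType list.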

    Read the article

  • When writing tests for a WordPress plugin, should I run them inside WordPress or in a normal browser?

    - by Nicola Peluchetti
    I have started using BDD for a WordPress plugin I'm working on and I'm rewriting the JS codebase to add tests. I've encountered a few problems but I'm going steady now. I was wondering if I have the right approach, because I'm writing tests that should pass in a normal browser environment and not inside WordPress. I chose to do this because I want my plugin to be totally independent of the WordPress environment: I'm using RequireJS in a way that doesn't expose any globals, and I'm loading my own version of jQuery that doesn't override the one that ships with WordPress. In this way my plugin would work the same on every WordPress version, and my code would not break if they change the jQuery version or someone uses my plugin on an old WordPress version. I wonder if this is the right approach or if I should always test inside the environment I'm working in. Since WordPress implies some globals, I had to write some functions purely for testing purposes, like "get_ajax_url": function() { if( typeof window.ajaxurl === "undefined" ) { return "http://localhost/wordpress/wp-admin/admin-ajax.php"; } else { return window.ajaxurl; } }, but apart from that I got everything working right. What do you think?

    Read the article

  • DOS Batch file to find "new" files by date

    - by Todd McArthur
    My PC has entered an infinite BSOD loop - but I do have access to a safe-mode command prompt. I'm trying to get an idea of "what changed" that might have triggered this. e.g. I might have gotten a virus, or an app update went belly up. I'd like to thus see which files were created/modified in the last few days/week or at least the *.exe, *.dll, *.com, *.bat etc. I thought I was ok with my Batch-fu but I'm stumped on how to write a quick batch file/command that would list the files for me. REM This will find the files, but the results are all muddled REM all EXE files, reverse sort by date, recursively through sub-directories dir *.exe /O-D /S What I'd really like is to find all (executable filetypes) that were created/modified in the last 3-7 days. Can anyone point me in the right direction?

    Read the article

  • TextMate - completion using an external file or file contained in project?

    - by Neil Baldwin
    Does anyone know how to get TextMate to search an external file (or even the files contained in a TextMate "project") with which to perform word completion? I'm coding some stuff on the C64 (using TextMate to write the code) and I have an external file containing labels for all of the hardware registers/kernal routines e.g VIC2InteruptStatus = $D019 It would be really handy to be able to type, say, 'VIC2I' then press the key for word completion and have TextMate find matches in the external library file. Rather than how I'm having to do it at the moment by opening the library file and copy-paste the register names into my code.

    Read the article

  • How can I pass an external instance to the constructor of an object that's being created using the default XNA XML content loader?

    - by Michael
    I'm trying to understand how to use the XNA XML content importer to instantiate non-trivial objects that are more than a collection of basic properties (e.g., a class that inherits from DrawableGameComponent or GameComponent and requires other things to be passed into its constructor). Is it possible to pass existing external instances (e.g., an instance of the current Game) to the constructor of an object that's being created using the default XNA XML content loader? For example, imagine that I have the following class, inheriting from DrawableGameComponent: public class Character : DrawableGameComponent { public string Name { get; set; } public Character(Game game) : base(game) { } public override void Update(GameTime gameTime) { } public override void Draw(GameTime gameTime) { } } If I had a simple class that did not need other parameters in its constructor (i.e., the Game instance), then I could simply use this XML: <XnaContent> <Asset Type="MyNamespace.Character"> <Name>John Doe</Name> </Asset> </XnaContent> ...and then create an instance of Character using this code: var character = Content.Load<Character>("MyXmlAssetName"); But that won't work, because I need to pass the Game into the constructor. What's the best way to handle this situation? Is there a way to pass in things like the current Game using the default XNA XML content loader? Do I need to write my own XML loader? (If so, how?) Is there a better object-oriented design that I should be using for my classes? Note: Although I used Game in this example, I'm really just asking how to pass any type of existing instance to my constructors. (For example, I'm using the Farseer Physics Engine, and some of my classes also need a reference to the Farseer World object.) Thanks in advance.

    Read the article

  • How to analyse logs after the site was hacked

    - by Vasiliy Toporov
    One of our web projects was hacked. The attacker changed some template files in the project and one core file of the web framework (one of the well-known PHP frameworks). We found all the corrupted files with git and reverted them. So now I need to find the weak point. With high probability we can say that it was not FTP or SSH password theft. The support specialist of the hosting provider (after log analysis) said that it was a security hole in our code. My questions: 1) What tools should I use to review the access and error logs of Apache? (Our server distro is Debian.) 2) Can you give tips for detecting suspicious lines in logs? Maybe tutorials or primers on useful regexps or techniques? 3) How do I separate "normal user behavior" from suspicious behavior in the logs? 4) Is there any way of preventing attacks in Apache? Thanks for your help.
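
    A hedged starting point for questions 1-3, assuming the default combined access log location on Debian (the path and the patterns are assumptions, not an exhaustive ruleset): grepping for common injection/traversal strings and summarising POST targets usually surfaces the suspicious lines quickly, and "normal" traffic tends to fall away once you filter around the modification time of the files git flagged:

        LOG=/var/log/apache2/access.log    # assumed path; adjust per vhost

        # Classic injection, traversal and remote-include patterns
        grep -Ei 'union.+select|base64_decode|\.\./\.\.|/etc/passwd|eval\(|wget%20|curl%20' "$LOG"

        # POST targets ranked by frequency - scripts that normally never receive POSTs stand out
        awk '$6 ~ /POST/ {print $7}' "$LOG" | sort | uniq -c | sort -rn | head -20

        # Narrow down to the time window when the corrupted files were changed
        grep '25/Apr/2013' "$LOG" | less    # substitute the real modification date

    For question 4, tools like logwatch can help with ongoing monitoring, but input validation in the application plus something like mod_security in front of it is the usual preventive layer.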

    Read the article

  • BizTalk 2009 - Error when Testing Map with Flat File Source Schema

    - by StuartBrierley
    I have recently been creating some flat file schemas using the BizTalk Server 2009 Flat File Schema Wizard.  I have then been mapping these flat file schemas to a "normal" XML schema format. I have not previously had any cause to map flat files and ran into some trouble when testing the first of these flat file maps; with an instance of the flat file as the source it threw an XSL transform error: Test Map.btm: error btm1050: XSL transform error: Unable to write output instance to the following <file:///C:\Documents and Settings\sbrierley\Local Settings\Temp\_MapData\Test Mapping\Test Map_output.xml>. Data at the root level is invalid. Line 1, position 1. Due to the complexity of the map in question I decided to create a small test map using the same source and destination schemas to see if I could pinpoint the problem.  Although the source message instance validated correctly against the flat file schema, when I then tested this simplified map I got the same error. After a time of fruitless head scratching and some serious Google time I figured out what the problem was. Looking at the map properties I noticed that I had the test map input set to "XML" - for a flat file instance this should be set to "native".

    Read the article

  • exported variable not persisted after script execution

    - by Daniele
    I'm facing a weird issue. I have a VM with Solaris 11 and am trying to write some bash scripts. If, on the shell, I type: export TEST=aaa and subsequently run: set I correctly see a new environment variable named TEST whose value is aaa. If, however, I do basically the same thing in a script, then when the script terminates I do not see the variable set. To make a concrete example, if in a file test.sh I have: #!/usr/bin/bash echo 1: $TEST #variable not defined yet, expect to print only 1: echo 2: $USER TEST=sss echo 3: $TEST export TEST echo 4: $TEST it prints: 1: 2: daniele 3: sss 4: sss and after its execution, TEST is not set in the shell. Am I missing something? I tried both export TEST=sss and the separate variable set/export, with no difference.
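
    For reference, a minimal sketch of the standard shell semantics behind this: a script executed normally runs in a child process, and environment variables only propagate downward to children, never back up to the parent shell. Sourcing the script runs it in the current shell instead:

        $ ./test.sh          # runs in a child bash; TEST is exported only inside that child
        $ echo "$TEST"
                             # empty - the child's environment disappeared with it

        $ . ./test.sh        # "source" the script: the commands run in the current shell
        $ echo "$TEST"
        sss                  # now the variable (and its export) persists

    So nothing is wrong with the script itself; it is simply being run in a process whose environment cannot outlive it.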

    Read the article

  • maintaining redirects in nginx from an external source

    - by Sascha
    I am in the situation of giving our marketing department the opportunity to maintain their redirects on their own. Until now, they passed the information to the IT department and we maintained it for them in nginx.conf. Some of these guys are quite familiar with redirects in IIS or even in Apache, but it is not an option to give them direct access to the nginx configuration. I see that there is no nginx support for .htaccess files, which I could otherwise give them access to, and I would also prefer not to grant write access to a conf file that nginx includes. I expect that our marketing team would break our nginx setup within hours... Is there a secure possibility without giving them access to the heart of our load balancer?
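
    One hedged approach (file names and paths are assumptions): keep the marketing-maintained rules in a separate file that nginx merely includes, and let a small deploy script - run by IT or a cron/CI job, not by marketing directly - validate the whole configuration before it can go live, so a broken edit never reaches the running load balancer:

        #!/usr/bin/env bash
        # Sketch: nginx.conf contains "include /etc/nginx/redirects.conf;" inside the
        # relevant server block, and that file only ever holds lines such as
        #   rewrite ^/old-campaign$ /new-campaign permanent;
        # Marketing edits a copy in /srv/marketing-redirects (git repo, share, ...).
        set -eu

        cp /etc/nginx/redirects.conf /etc/nginx/redirects.conf.bak
        cp /srv/marketing-redirects/redirects.conf /etc/nginx/redirects.conf

        if nginx -t; then
            service nginx reload
        else
            echo "new redirects are invalid; rolling back" >&2
            cp /etc/nginx/redirects.conf.bak /etc/nginx/redirects.conf
            exit 1
        fi

    This keeps write access away from nginx.conf itself while still giving marketing a single, simple file to maintain.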

    Read the article

  • Win 7 Explorer backup and long paths

    - by user53299
    I use Explorer to do backups because Win 7's backup program asks me to take backups previously done and to put them back in the drive. I am opposed to that idea since I believe backups should remain in storage. With Explorer backups (burn and burn to disc) I have encountered the "destination path too long" error message and it shows the name of a folder "Debug" three times. I have hundreds of folders named "Debug" thanks to Visual Studio. At this moment I'm too angry at Microsoft to write a program to determine my 3 longest paths. (Aside: This is all after coincidentally reading two articles about path junctions earlier this evening which already made me kind of unhappy.) Please, is there an easy way to continue to make backups with Explorer? Edit: I should add that renaming paths wrecks Visual Studio projects so I really need to isolate the small number of problem paths or find a cleaner solution.

    Read the article

  • Windows command line built-in compression/extraction tool?

    - by Will Marcouiller
    I need to write a batch file that unzips files into their current folder, starting from a given root folder:

    Folder 0
    |----- Folder 1
    |      |----- File1.zip
    |      |----- File2.zip
    |      |----- File3.zip
    |
    |----- Folder 2
    |      |----- File4.zip
    |
    |----- Folder 3
           |----- File5.zip
           |----- FileN.zip

    So, I wish my batch file to be launched like so: ocd.bat /d="Folder 0" It should then iterate through all of the subfolders from within the batch file and unzip the files exactly where the .zip files are located. So here's my question: does Windows (from XP on, at least) have a command-line interface to its embedded zip tool? Otherwise, shall I stick with another third-party util?

    Read the article

  • design for supporting entities with images

    - by brainydexter
    I have multiple entities like Hotels, Destination Cities, etc., which can contain images. The way I have my system set up right now is this: I think of all the images as belonging to one universal set (a table in the DB contains filePaths to all the images). When I have to add an image to an entity, I check whether the image already exists in this universal set of images. If it exists, I attach a reference to this image; otherwise I create a new image. E.g.: class ImageEntityHibernateDAO { public void addImageToEntity(IContainImage entity, String filePath, String title, String altText) { ImageEntity image = this.getImage(filePath); if (image == null) image = new ImageEntity(filePath, title, altText); getSession().beginTransaction(); entity.getImages().add(image); getSession().getTransaction().commit(); } } My question is: earlier I had to write this code for each entity (and each entity would have a Set collection). So, instead of re-writing the same code, I created the following interface: public interface IContainImage { Set<ImageEntity> getImages(); } Entities which have image collections also implement the IContainImage interface. Now, for any entity that needs to support adding images, all I have to invoke from the DAO looks something like this: // in DestinationDAO::addImageToDestination { imageDao.addImageToEntity(destination, imageFileName, imageTitle, imageAltText); // in HotelDAO::addImageToHotel { imageDao.addImageToEntity(hotel, imageFileName, imageTitle, imageAltText); It'd be a great help if someone could give me some critique of this design. Are there any serious flaws that I'm not seeing right away?

    Read the article

  • New website - best practice for requirements specs? [closed]

    - by Alex K.
    Possible Duplicate: Extracting user requirements from a person who does not know how to express himself As a hobby freelancer I'm new to this. I've never before had a non-technical client explain to me what his future website is supposed to do. A person wants me to make a website for him and he basically explained to me what it's about. However, he's not a technical person and he just doesn't understand what I need to know or how to properly describe/explain it to me. When I asked him how a user is supposed to submit an entry to the website, he told me "He fills out a form.", which is not really helping me. This was just an example; it goes on for other sections of the website as well, which are a lot harder to explain. The website will be aimed at a specific professional user demographic and I have no clue about their profession or how their industry works. I tried to find some good Product Requirements Document templates on Google, but none of them really seemed like they could help him understand how to write it so I can understand what he wants/needs. Can somebody please give me a hint on how to deal with such non-technical clients?

    Read the article

  • Processing Kinect v2 Color Streams in Parallel

    - by Chris Gardner
    Originally posted on: http://geekswithblogs.net/freestylecoding/archive/2014/08/20/processing-kinect-v2-color-streams-in-parallel.aspx I've really been enjoying being a part of the Kinect for Windows Developer's Preview. The new hardware has some really impressive capabilities. However, with great power comes great system specs. Unfortunately, my little laptop that could is not 100% up to the task; I've had to get a little creative. The most disappointing thing I've run into is that I can't always cleanly display the color camera stream in managed code. I managed to strip the code down to what I believe is the bare minimum: using( ColorFrame _ColorFrame = e.FrameReference.AcquireFrame() ) { if( null == _ColorFrame ) return;   BitmapToDisplay.Lock(); _ColorFrame.CopyConvertedFrameDataToIntPtr( BitmapToDisplay.BackBuffer, Convert.ToUInt32( BitmapToDisplay.BackBufferStride * BitmapToDisplay.PixelHeight ), ColorImageFormat.Bgra ); BitmapToDisplay.AddDirtyRect( new Int32Rect( 0, 0, _ColorFrame.FrameDescription.Width, _ColorFrame.FrameDescription.Height ) ); BitmapToDisplay.Unlock(); } With this snippet, I'm placing the converted Bgra32 color stream directly on the BackBuffer of the WriteableBitmap. This gives me pretty smooth playback, but I still get the occasional freeze for half a second. After a bit of profiling, I discovered there were a few problems. The first problem is the size of the buffer along with the conversion on the buffer. At this time, the raw image format of the data from the Kinect is Yuy2. This is great for direct video processing. It would be ideal if I had a WriteableVideo object in WPF. However, this is not the case. Further digging led me to the real problem. It appears that the SDK is converting the input serially. Let's think about this for a second. The color camera is a 1080p camera. As we should all know, this gives us a native resolution of 1920 x 1080. This produces 2,073,600 pixels. Yuy2 uses 4 bytes per 2 pixels, for a buffer size of 4,147,200 bytes. Bgra32 uses 4 bytes per pixel, for a buffer size of 8,294,400 bytes. The SDK appears to be doing this on one thread. I started wondering if I could do this better myself. I mean, I have 8 cores in my system. Why can't I use them all? The first problem is converting a Yuy2 frame into a Bgra32 frame. It is NOT trivial. I spent a day of research on just how to do this. In the end, I didn't even produce the best algorithm possible, but it did work. After I managed to get that to work, I knew my next step was to get the conversion operation off the UI Thread. This was a simple process of throwing the work into a Task. Of course, this meant I had to marshal the final write to the WriteableBitmap back to the UI thread. Finally, I needed to vectorize the operation so I could run it safely in parallel. This was, mercifully, not quite as hard as I thought it would be. I had my loop return an index to a pair of pixels. From there, I had to tell the loop to do everything for this pair of pixels. If you're wondering why I did it for pairs of pixels, look back above at the specification for the Yuy2 format. I won't go into full detail on why each 4 bytes contains 2 pixels of information, but rest assured that there is a reason why the format is described in that way. The first working attempt at this algorithm successfully turned my poor laptop into a space heater. I very quickly brought and maintained all 8 cores up to about 97% usage. 
That's when I remembered that obscure option in the Task Parallel Library where you could limit the amount of parallelism used. After a little trial and error, I discovered 4 parallel tasks was enough for most cases. This yielded the follow code: private byte ClipToByte( int p_ValueToClip ) { return Convert.ToByte( ( p_ValueToClip < byte.MinValue ) ? byte.MinValue : ( ( p_ValueToClip > byte.MaxValue ) ? byte.MaxValue : p_ValueToClip ) ); }   private void ColorFrameArrived( object sender, ColorFrameArrivedEventArgs e ) { if( null == e.FrameReference ) return;   // If you do not dispose of the frame, you never get another one... using( ColorFrame _ColorFrame = e.FrameReference.AcquireFrame() ) { if( null == _ColorFrame ) return;   byte[] _InputImage = new byte[_ColorFrame.FrameDescription.LengthInPixels * _ColorFrame.FrameDescription.BytesPerPixel]; byte[] _OutputImage = new byte[BitmapToDisplay.BackBufferStride * BitmapToDisplay.PixelHeight]; _ColorFrame.CopyRawFrameDataToArray( _InputImage );   Task.Factory.StartNew( () => { ParallelOptions _ParallelOptions = new ParallelOptions(); _ParallelOptions.MaxDegreeOfParallelism = 4;   Parallel.For( 0, Sensor.ColorFrameSource.FrameDescription.LengthInPixels / 2, _ParallelOptions, ( _Index ) => { // See http://msdn.microsoft.com/en-us/library/windows/desktop/dd206750(v=vs.85).aspx int _Y0 = _InputImage[( _Index << 2 ) + 0] - 16; int _U = _InputImage[( _Index << 2 ) + 1] - 128; int _Y1 = _InputImage[( _Index << 2 ) + 2] - 16; int _V = _InputImage[( _Index << 2 ) + 3] - 128;   byte _R = ClipToByte( ( 298 * _Y0 + 409 * _V + 128 ) >> 8 ); byte _G = ClipToByte( ( 298 * _Y0 - 100 * _U - 208 * _V + 128 ) >> 8 ); byte _B = ClipToByte( ( 298 * _Y0 + 516 * _U + 128 ) >> 8 );   _OutputImage[( _Index << 3 ) + 0] = _B; _OutputImage[( _Index << 3 ) + 1] = _G; _OutputImage[( _Index << 3 ) + 2] = _R; _OutputImage[( _Index << 3 ) + 3] = 0xFF; // A   _R = ClipToByte( ( 298 * _Y1 + 409 * _V + 128 ) >> 8 ); _G = ClipToByte( ( 298 * _Y1 - 100 * _U - 208 * _V + 128 ) >> 8 ); _B = ClipToByte( ( 298 * _Y1 + 516 * _U + 128 ) >> 8 );   _OutputImage[( _Index << 3 ) + 4] = _B; _OutputImage[( _Index << 3 ) + 5] = _G; _OutputImage[( _Index << 3 ) + 6] = _R; _OutputImage[( _Index << 3 ) + 7] = 0xFF; } );   Application.Current.Dispatcher.Invoke( () => { BitmapToDisplay.WritePixels( new Int32Rect( 0, 0, Sensor.ColorFrameSource.FrameDescription.Width, Sensor.ColorFrameSource.FrameDescription.Height ), _OutputImage, BitmapToDisplay.BackBufferStride, 0 ); } ); } ); } } This seemed to yield a results I wanted, but there was still the occasional stutter. This lead to what I realized was the second problem. There is a race condition between the UI Thread and me locking the WriteableBitmap so I can write the next frame. Again, I'm writing approximately 8MB to the back buffer. Then, I started thinking I could cheat. The Kinect is running at 30 frames per second. The WPF UI Thread runs at 60 frames per second. This made me not feel bad about exploiting the Composition Thread. I moved the bulk of the code from the FrameArrived handler into CompositionTarget.Rendering. Once I was in there, I polled from a frame, and rendered it if it existed. Since, in theory, I'm only killing the Composition Thread every other hit, I decided I was ok with this for cases where silky smooth video performance REALLY mattered. This ode looked like this: private byte ClipToByte( int p_ValueToClip ) { return Convert.ToByte( ( p_ValueToClip < byte.MinValue ) ? byte.MinValue : ( ( p_ValueToClip > byte.MaxValue ) ? 
byte.MaxValue : p_ValueToClip ) ); }   void CompositionTarget_Rendering( object sender, EventArgs e ) { using( ColorFrame _ColorFrame = FrameReader.AcquireLatestFrame() ) { if( null == _ColorFrame ) return;   byte[] _InputImage = new byte[_ColorFrame.FrameDescription.LengthInPixels * _ColorFrame.FrameDescription.BytesPerPixel]; byte[] _OutputImage = new byte[BitmapToDisplay.BackBufferStride * BitmapToDisplay.PixelHeight]; _ColorFrame.CopyRawFrameDataToArray( _InputImage );   ParallelOptions _ParallelOptions = new ParallelOptions(); _ParallelOptions.MaxDegreeOfParallelism = 4;   Parallel.For( 0, Sensor.ColorFrameSource.FrameDescription.LengthInPixels / 2, _ParallelOptions, ( _Index ) => { // See http://msdn.microsoft.com/en-us/library/windows/desktop/dd206750(v=vs.85).aspx int _Y0 = _InputImage[( _Index << 2 ) + 0] - 16; int _U = _InputImage[( _Index << 2 ) + 1] - 128; int _Y1 = _InputImage[( _Index << 2 ) + 2] - 16; int _V = _InputImage[( _Index << 2 ) + 3] - 128;   byte _R = ClipToByte( ( 298 * _Y0 + 409 * _V + 128 ) >> 8 ); byte _G = ClipToByte( ( 298 * _Y0 - 100 * _U - 208 * _V + 128 ) >> 8 ); byte _B = ClipToByte( ( 298 * _Y0 + 516 * _U + 128 ) >> 8 );   _OutputImage[( _Index << 3 ) + 0] = _B; _OutputImage[( _Index << 3 ) + 1] = _G; _OutputImage[( _Index << 3 ) + 2] = _R; _OutputImage[( _Index << 3 ) + 3] = 0xFF; // A   _R = ClipToByte( ( 298 * _Y1 + 409 * _V + 128 ) >> 8 ); _G = ClipToByte( ( 298 * _Y1 - 100 * _U - 208 * _V + 128 ) >> 8 ); _B = ClipToByte( ( 298 * _Y1 + 516 * _U + 128 ) >> 8 );   _OutputImage[( _Index << 3 ) + 4] = _B; _OutputImage[( _Index << 3 ) + 5] = _G; _OutputImage[( _Index << 3 ) + 6] = _R; _OutputImage[( _Index << 3 ) + 7] = 0xFF; } );   BitmapToDisplay.WritePixels( new Int32Rect( 0, 0, Sensor.ColorFrameSource.FrameDescription.Width, Sensor.ColorFrameSource.FrameDescription.Height ), _OutputImage, BitmapToDisplay.BackBufferStride, 0 ); } }

    Read the article

  • Debian pure-ftpd, Restrict access

    - by durduvakis
    I am running Debian Wheezy with ISPConfig 3 plus ModSecurity, and I would like to restrict FTP access to specific IP(s) globally (not for specific FTP users only); that could be either 127.0.0.1 or one I would manually add later. I would also like to completely disable FTP access from the web, and allow it only from FTP client software (if that is possible). The idea of closing firewall ports is not what I want. I know I could do this by setting some firewall rule, but that is not what I currently need. I have managed to do this, for example, for phpMyAdmin inside its .conf file, but unfortunately I cannot find any configuration to alter for pure-ftpd on my system. Restricting web-FTP access may be possible by adding some rule to the apache2 conf, but I am not sure how to write such a rule. Thanks to everyone who can help. Cheers.
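
    One hedged option on Debian, assuming the stock pure-ftpd-wrapper layout (paths and service names are assumptions, and ISPConfig may regenerate some of these files): the wrapper turns single-line files under /etc/pure-ftpd/conf/ into daemon options, and the Bind setting makes pure-ftpd listen only on a chosen address, e.g. the loopback interface, until another IP is needed:

        # Listen on 127.0.0.1 only; the wrapper expects "address,port"
        echo '127.0.0.1,21' > /etc/pure-ftpd/conf/Bind

        # Restart whichever variant ISPConfig installed (often pure-ftpd-mysql)
        /etc/init.d/pure-ftpd-mysql restart

        # Confirm what the daemon is actually bound to
        netstat -ltnp | grep ':21'

    Allowing exactly one additional external IP is address filtering rather than binding, which pure-ftpd itself does not really offer, so that part usually ends up as a firewall rule after all.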

    Read the article

  • How safe is it to rely on third-party Python libs in a production product?

    - by skyler
    I'm new to Python and come from the write-everything-yourself world of PHP (at least this is how I always approached it). I'm using Flask, WTForms, Jinja2, and I've just discovered Flask-Login which I want to use. My question is about the reliability of using thirdparty libraries for core functionality in a project that is planned to be around for several years. I've installed these libraries (via pip) into a virtualenv environment. What happens if these libraries stop being distributed? Should I back up these libraries (are they eggs)? Can I store these libraries in my project itself, instead of relying on pip to install them in a virtualenv? And should I store these separately? I'm worried that I'll rely on a library for core functionality, and then one day I'll download an incompatible version through pip, or the author or maintainer will stop distributing it and it'll no longer be available. How can I protect against this, and ensure that any thirdparty libraries that I use in my projects will always be available as they are now?
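
    A hedged sketch of the usual pip-based way to pin and locally archive dependencies so the project does not depend on PyPI (or a library's author) staying around; the flags reflect pip of that era and may be spelled differently in newer releases ("pip download"):

        # Record the exact versions the project currently works with
        pip freeze > requirements.txt

        # Keep local copies of the packages alongside the project (commit or back them up)
        mkdir -p vendor
        pip install --download=vendor -r requirements.txt

        # Later, rebuild the environment without touching PyPI at all
        virtualenv env
        env/bin/pip install --no-index --find-links=vendor -r requirements.txt

    With the archives in vendor/ and versions pinned in requirements.txt, an upstream disappearance or an incompatible new release cannot silently change what gets installed.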

    Read the article

  • SSD install - what do I need to watch out for when reconfiguring SATA ports?

    - by tim11g
    I installed a Samsung 840 SSD in a Windows 7 machine. It seems to be working fine, but I'm not seeing the expected performance. The AS SSD benchmark gives 76 for read and 138 for write. At the upper left of the benchmark it says "pciide - BAD" and "31K - BAD". I'm assuming the "pciide - BAD" means the motherboard (Gigabyte GA-P35-DS4) is configured for IDE emulation and needs to be changed to native SATA (AHCI). I don't know what the "31K" refers to. I saw this article, which indicates that changing the SATA mode of the boot drive can cause problems (Blue Screen): Error message occurs after you change the SATA mode of the boot drive. What is the correct procedure to change the SATA mode without causing a system failure? Apply the registry change from the MSFT article above first, then reboot and change the SATA mode? Will the SATA mode change in the BIOS affect other drives?

    Read the article

  • How to view/mount other partitions on your hard drive

    - by Preston Zacharias
    Recently I installed Ubuntu 12.04 Beta 2 on a USB flash drive and decided to install it on an old external HDD, which I have taken out of its casing and successfully mounted in my desktop computer. There is no other operating system besides the newly installed Ubuntu. However, there is about 500 GB of data on the drive. This is why I used partitioning software on my Windows 7 netbook to partition the hard drive: I set aside 1 TB for files, 350 GB of space for Linux and the remaining 650 GB for Vista, which I plan on installing soon. But this is where the problem sets in... when installing Ubuntu it does not recognize that the drive is partitioned at all; it's just one big open block of space... so I used the installer's built-in partitioning feature to set aside 300 GB for the main Ubuntu install and 50 GB for swap space. I set both of these partitions to be created at the "end" so that it wouldn't delete or write over my data. And this is where I am really lost: when booting into Ubuntu I am able to use it perfectly fine, got on the internet, etc... but I have NO CLUE as to how I can view the files that were previously on the drive (all of my data that I had prior to the install). How can I mount/view the other partition so that I can have access to my data? Thank you ahead of time! I REALLY appreciate any help or advice! ~Preston
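
    A hedged sketch of how to find and mount the data partition from a terminal (the device name /dev/sdb2 is an assumption; read the real one off the fdisk output). In many cases the file manager already lists the partition under Devices and mounts it on a click, so the commands below are only needed if that fails:

        # See which partitions exist on the drive and how large they are
        sudo fdisk -l

        # Check what filesystem the data partition carries (NTFS, ext4, ...)
        sudo blkid /dev/sdb2          # assumed device name

        # Mount it at a readable location
        sudo mkdir -p /media/data
        sudo mount /dev/sdb2 /media/data
        ls /media/data

    If blkid reports no recognisable filesystem on any partition, the earlier repartitioning may not have preserved the old data, and the question changes from mounting to file recovery.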

    Read the article
