Search Results

Search found 6499 results on 260 pages for 'drop shadow'.

  • Why would a terminal server be kicking users off after about 5 minutes?

    - by Matt
    I have a terminal server that was restored from an old 1U server to an IBM HS20 blade running ESXi. The restore was done with ShadowProtect's Hardware Independent Restore. All appeared to be functioning well, but we're finding that users are being periodically kicked off after around 5 minutes or so. The interesting thing is that it doesn't appear to be a simple disconnect: the whole session is gone, as if the user had been logged out. There is nothing significant in the event log, and certainly nothing logged by Terminal Services. I've also checked the client computers, and logged in from them to our other terminal server, and there weren't any issues there. It doesn't appear to be network related. Any ideas?

  • Barcodes and Bugs

    - by Tim Dexter
    A great mail from Mike at Browning last week. He has been through the wringer getting his BIP barcoding sorted out, but he's now out of the woods. Here's the final result. By way of explanation, an excerpt from Mike's email: "This is an example of the GS1-128 carton shipping labels we are now producing with BIP in our web application for our vendors who drop ship products to our dealers. It produces 4 labels per printed page, in PDF format, on peel & stick label paper. Each label has a unique carton number, and a unique carton serial number in the SSCC-18 barcode. This example is for Cabelas (each customer has slightly different GS1-128 label format requirements – a custom template for each – a pain!). I am using custom Java encoders I wrote for the UPC and SSCC-18 barcodes, and a standard encoder (code128b) for the ShipTo zip barcode. Is there any way yet to get around that SUPER ANNOYING bug when opening the RTF template in MS Word, where it replaces my XSL code text in the barcode fields with gibberish??? Every time I open it I have to re-enter all the XSL code – not only to be able to read and edit it, but also to get it to work in BIP (BIP doesn't like the gibberish if I upload a template that has it)." Mike's last point, regarding the annoying bug in the template builder, is one that I have experienced occasionally. The development team have looked at it and found it to be an issue with MS Word and not a plugin problem. That's all well and good, but how can you get around it? Well, you can take advantage of the font mapping that BIP offers to get the barcodes into the PDF output. As many of you know, to get a barcode font to appear in the PDF output you need to employ the xdo.cfg file in the template builder config directory. You would normally have an entry such as this, to map a barcode font so that it renders in the PDF output when testing from the template builder plugin:

        <font family="Code 128" style="normal" weight="normal">
          <truetype path="C:\windows\fonts\128R00.TTF"/>
        </font>

    Mike's issue is only present when the form field is highlighted with a barcode font; the other fields in the template are OK. What you can do to get around the issue is bend the config entry so that you avoid using the barcode font in the template at all. Change the entry to something like:

        <font family="Calibri" style="normal" weight="normal">
          <truetype path="C:\windows\fonts\128R00.TTF"/>
        </font>

    Note that we are mapping Calibri – a human-readable, non-'erroring' font in the template – to the Code 128 barcode font. Where you used to highlight the field with the barcode font in MS Word, you now use the Calibri font instead. At run time, BIP will look up the Calibri font mapping and drop in the Code 128 font. Of course, Calibri is just an example; you need to pick a font that you are not going to use anywhere else in the layout.
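
    For context, here is a hedged sketch of where such an entry sits in a complete xdo.cfg file, assuming the standard <config>/<fonts> wrapper used by the template builder; the path and family names are illustrative and should be adjusted to your own install:

        <config version="1.0.0" xmlns="http://xmlns.oracle.com/oxp/config/">
          <fonts>
            <!-- Map the readable font used in the RTF template to the barcode TTF -->
            <font family="Calibri" style="normal" weight="normal">
              <truetype path="C:\windows\fonts\128R00.TTF"/>
            </font>
          </fonts>
        </config>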

  • Simple one-way synchronisation of user password list between servers

    - by Renaud Bompuis
    Using a RedHat-derivative distro (CentOS), I'd like to keep the list of regular users (UID 500 and over), along with the corresponding group and shadow entries, pushed to a backup server. The sync is one-way only, from the main server to the backup server. I don't really want to have to deal with LDAP or NIS; all I need is a simple script that can be run nightly to keep the backup server updated. The main server can SSH into the backup system. Any suggestions? Edit: Thanks for the suggestions so far, but I think I didn't make myself clear enough. I'm only looking at synchronising normal users whose UID is at or above 500. System/service users (with UIDs below 500) may be different on the two systems, so you can't just sync the whole files, I'm afraid.
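
    For what it's worth, a minimal sketch of the kind of nightly script being asked for - untested, assuming root SSH access to a host named backuphost and a staging directory /root/user-sync on the backup side; a real deployment would also need to merge the synced entries below the backup server's own system accounts and replace the target files atomically:

        #!/bin/sh
        # Push regular users (UID >= 500), their shadow entries, and
        # regular groups (GID >= 500) to the backup server.
        BACKUP=backuphost           # assumed host name
        TMP=$(mktemp -d)

        # Regular user accounts only
        awk -F: '$3 >= 500' /etc/passwd > "$TMP/passwd.sync"

        # Shadow entries for exactly those users
        for u in $(awk -F: '$3 >= 500 {print $1}' /etc/passwd); do
            grep "^$u:" /etc/shadow >> "$TMP/shadow.sync"
        done

        # Regular groups only
        awk -F: '$3 >= 500' /etc/group > "$TMP/group.sync"

        scp "$TMP"/*.sync root@"$BACKUP":/root/user-sync/
        rm -rf "$TMP"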

  • SQL Server Date Comparison Functions

    - by HighAltitudeCoder
    A few months ago, I found myself working with a repetitive cursor that looped until the data had been manipulated enough times that it was finally correct. The cursor was heavily dependent upon dates, requiring the earlier of two (or several) dates in one stored procedure, while requiring the later of two dates in another stored procedure. In short, what I needed was a function that would allow me to perform the following evaluation:

        WHERE MAX(Date1, Date2) < @SomeDate

    The problem is that the MAX() function in SQL Server does not perform this kind of comparison. So, I set out to put these functions together. They are titled EarlierOf() and LaterOf().

        /**********************************************************
                              EarlierOf.sql

        Return the earlier of two DATETIME variables.

        Parameter 1: DATETIME1
        Parameter 2: DATETIME2

        Works for a variety of DATETIME or NULL values.
        Even though comparisons with NULL are actually
        indeterminate, we know conceptually that NULL is
        not earlier or later than any other date provided.

        SYNTAX:
        SELECT dbo.EarlierOf('1/1/2000','12/1/2009')
        SELECT dbo.EarlierOf('2009-12-01 00:00:00.000','2009-12-01 00:00:00.521')
        SELECT dbo.EarlierOf('11/15/2000',NULL)
        SELECT dbo.EarlierOf(NULL,'1/15/2004')
        SELECT dbo.EarlierOf(NULL,NULL)
        **********************************************************/
        USE AdventureWorks
        GO

        IF EXISTS
              (SELECT *
              FROM sysobjects
              WHERE name = 'EarlierOf'
              AND xtype = 'FN')
        BEGIN
              DROP FUNCTION EarlierOf
        END
        GO

        CREATE FUNCTION EarlierOf
        (
              @Date1 DATETIME,
              @Date2 DATETIME
        )
        RETURNS DATETIME
        AS
        BEGIN
              DECLARE @ReturnDate DATETIME

              IF (@Date1 IS NULL AND @Date2 IS NULL)
              BEGIN
                    SET @ReturnDate = NULL
                    GOTO EndOfFunction
              END

              ELSE IF (@Date1 IS NULL AND @Date2 IS NOT NULL)
              BEGIN
                    SET @ReturnDate = @Date2
                    GOTO EndOfFunction
              END

              ELSE IF (@Date1 IS NOT NULL AND @Date2 IS NULL)
              BEGIN
                    SET @ReturnDate = @Date1
                    GOTO EndOfFunction
              END

              ELSE
              BEGIN
                    SET @ReturnDate = @Date1
                    IF @Date2 < @Date1
                          SET @ReturnDate = @Date2
                    GOTO EndOfFunction
              END

              EndOfFunction:
              RETURN @ReturnDate

        END -- End Function
        GO

        ---- Set Permissions
        --GRANT SELECT ON EarlierOf TO UserRole1
        --GRANT SELECT ON EarlierOf TO UserRole2
        --GO

    The inverse of this function, LaterOf(), is only slightly different: its header comment reads "Return the later of two DATETIME variables", its SYNTAX examples call dbo.LaterOf, and the comparison in the final branch is reversed:

        ELSE
        BEGIN
              SET @ReturnDate = @Date1
              IF @Date2 > @Date1
                    SET @ReturnDate = @Date2
              GOTO EndOfFunction
        END

    The interesting thing about these functions is their simplicity and their built-in NULL handling. It's interesting because it seems like something that does this should already exist in SQL Server. From a different vantage point, if you create this functionality and it is easy to use (ideally, intuitively self-explanatory), you have made a successful contribution. Interesting is good; self-explanatory, or intuitive, is FAR better. Happy coding! Graeme
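
    As a footnote, a quick usage sketch to close the loop on the original goal; the table and column names here are hypothetical, purely for illustration. (Worth noting for readers on newer versions: SQL Server 2022 added built-in GREATEST() and LEAST() functions that cover this case natively.)

        -- Hypothetical Orders table, for illustration only
        DECLARE @SomeDate DATETIME = '1/1/2009';

        SELECT OrderID
        FROM Orders
        WHERE dbo.LaterOf(OrderDate, ShipDate) < @SomeDate;

        -- On SQL Server 2022 and later, the built-in equivalent:
        -- WHERE GREATEST(OrderDate, ShipDate) < @SomeDate;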

  • XNA RenderTarget2D Sample

    - by Michael B. McLaughlin
    I remember being scared of render targets when I first started with XNA. They seemed like weird magic and I didn't understand them at all. There's nothing to be frightened of, though, and they are pretty easy to learn how to use. The first thing you need to know is that when you're drawing in XNA, you aren't actually drawing to the screen. Instead you're drawing to this thing called the "back buffer". Internally, XNA maintains two sections of graphics memory. Each one is exactly the same size as the other and has all the same properties (such as surface format, whether there's a depth buffer and/or a stencil buffer, and so on). XNA flips between these two sections of memory every update-draw cycle. So while you are drawing to one, it's busy drawing the other one on the screen. When the current update-draw cycle ends, it flips: the section you were just drawing to gets drawn to the screen, while the one that was being drawn to the screen before is now the one you'll be drawing on. This is what's meant by "double buffering". If you drew directly to the screen, the player would see all of those draws taking place as they happened, and that would look odd and not very good at all. Those two sections of graphics memory are render targets. All a render target is, is a section of graphics memory to which things can be drawn. In addition to the two that XNA maintains automatically, you can also create and set your own using RenderTarget2D and GraphicsDevice.SetRenderTarget. Using render targets lets you do all sorts of neat post-processing effects (like bloom) to make your game look cooler. It also lets you do things like motion blur, and lets you create mirrors in 3D games. There are quite a lot of things that render targets let you do. To go along with this post, I wrote up a simple sample for how to create and use a RenderTarget2D. It's available under the terms of the Microsoft Public License and can be downloaded from my website here: http://www.bobtacoindustries.com/developers/utils/RenderTarget2DSample.zip . Other than the 'using' statements, every line is commented in detail so that it should (hopefully) be easy to follow along with and understand. If you have any questions, leave a comment here or drop me a line on Twitter. One last note. While creating the sample I came across an interesting quirk. If you start by creating a Windows Game, and then make a copy for Windows Phone 7, the drop-down that lets you choose between deploying to a WP7 device and the WP7 emulator stays grayed out. To resolve this, you need to right-click on the Windows Phone 7 version in the Solution Explorer and choose "Set as StartUp Project". The bar will then become active, letting you change the target you wish to deploy to. If you want another version to be the one that starts up when you press F5 to start debugging, just right-click on that version and choose "Set as StartUp Project" for it once you've set the WP7 target (device or emulator) that you want.
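
    A minimal sketch of the create/set/draw cycle in XNA 4.0 - my own condensation, not code from the linked sample, with illustrative names and assuming the spriteBatch field from the default Game template:

        // Field on your Game subclass
        RenderTarget2D sceneTarget;

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            // Same size as the back buffer; other sizes/formats are possible
            sceneTarget = new RenderTarget2D(GraphicsDevice,
                GraphicsDevice.PresentationParameters.BackBufferWidth,
                GraphicsDevice.PresentationParameters.BackBufferHeight);
        }

        protected override void Draw(GameTime gameTime)
        {
            // 1. Redirect all drawing into our own render target
            GraphicsDevice.SetRenderTarget(sceneTarget);
            GraphicsDevice.Clear(Color.CornflowerBlue);
            // ... draw the scene here ...

            // 2. Switch back to the back buffer
            GraphicsDevice.SetRenderTarget(null);
            GraphicsDevice.Clear(Color.Black);

            // 3. Draw the render target like any texture
            //    (RenderTarget2D derives from Texture2D)
            spriteBatch.Begin();
            spriteBatch.Draw(sceneTarget, Vector2.Zero, Color.White);
            spriteBatch.End();

            base.Draw(gameTime);
        }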

  • How much effort should you put into a junior developer?

    - by Crazy Eddie
    At what point should one give up? I've tried helping them out by having them shadow me. We agree to take a short break, and then they go missing in action for a while... then just go back to their desk. Even when I know they've done this, part of me feels like I shouldn't have to go get them, and that they should be showing interest in learning. Frankly, it's a bunch of time I don't have, explaining things as I go when I could just do it. Am I expecting too much in expecting that if they want to learn, they'll make sure I know they're ready and willing? They go to meetings they weren't told they had to attend (good), but then sit in the corner and sleep (bad). I don't even know what to do with that. Sometimes I give them something small to do and they do it great, so I give them something just a touch harder and they fail it totally, hard. They check things in without testing them. Part of me thinks that maybe I should be spending more time with them, but at the same time I don't see a lot of interest, and I really, honestly don't have time to teach the same things over and over. Sometimes I get asked questions that are really, really easy to answer if you just do a little bit of your own work trying to find out. Other times I'm not asked anything. I'm sure I could be doing better, but honestly... I don't really want to anymore.

  • SQL SERVER – Finding Different ColumnName From Almost Identical Tables

    - by pinaldave
    I have mentioned earlier on this blog that I love social media – Facebook and Twitter. I receive so many interesting questions that sometimes I wonder how I never faced them in a real-life scenario of my own. Well, let us look at one such situation. Here is one of the questions I received through my social media handle: "Pinal, I have a large database. I did not develop this database, but I have inherited it. In our database we have many tables, and all the tables are in pairs: one archive table and one current table. Now here is the interesting situation. For a while, for various reasons, our organization stopped paying attention to the archive data; we did not archive anything for some time. As if this were not enough, we even changed the schema of the current table but did not change the corresponding archive table. This is now becoming a huge problem. We know for sure that we have added a few columns to the current table, but we do not know which ones. Is there any way we can figure out which columns were added to the current table and do not exist in the archive table? We cannot use any third party tool. Would you please guide us?" Well, here is an interesting example of how we can use the sys.columns catalog view to get the details of newly added columns. I have previously written about EXCEPT over here, which is very similar to Oracle's MINUS. In the following example we are going to create two tables, one of which has an extra column. In our resultset we will get the name of the extra column, as we are comparing the column names from the catalog view.

        USE AdventureWorks2012
        GO
        CREATE TABLE ArchiveTable (ID INT, Col1 VARCHAR(10), Col2 VARCHAR(100), Col3 VARCHAR(100));
        CREATE TABLE CurrentTable (ID INT, Col1 VARCHAR(10), Col2 VARCHAR(100), Col3 VARCHAR(100), ExtraCol INT);
        GO
        -- Columns in ArchiveTable but not in CurrentTable
        SELECT name ColumnName
        FROM sys.columns
        WHERE OBJECT_NAME(OBJECT_ID) = 'ArchiveTable'
        EXCEPT
        SELECT name ColumnName
        FROM sys.columns
        WHERE OBJECT_NAME(OBJECT_ID) = 'CurrentTable'
        GO
        -- Columns in CurrentTable but not in ArchiveTable
        SELECT name ColumnName
        FROM sys.columns
        WHERE OBJECT_NAME(OBJECT_ID) = 'CurrentTable'
        EXCEPT
        SELECT name ColumnName
        FROM sys.columns
        WHERE OBJECT_NAME(OBJECT_ID) = 'ArchiveTable'
        GO
        DROP TABLE ArchiveTable;
        DROP TABLE CurrentTable;
        GO

    The above queries return the name of any column that exists in only one of the two tables. I hope this solves the problem; it is not the most elegant solution possible, but it works. Here is the puzzle back to you: what native T-SQL solution would you have provided in this situation? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL System Table, SQL Tips and Tricks, T SQL, Technology
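
    As one possible answer to the closing puzzle, here is a hedged sketch (mine, not from the post) that reports both directions in a single query by joining the catalog views on column name:

        -- Columns that exist in only one of the two tables
        ;WITH cur AS (
            SELECT name FROM sys.columns
            WHERE object_id = OBJECT_ID('CurrentTable')
        ), arc AS (
            SELECT name FROM sys.columns
            WHERE object_id = OBJECT_ID('ArchiveTable')
        )
        SELECT COALESCE(cur.name, arc.name) AS ColumnName,
               CASE WHEN arc.name IS NULL
                    THEN 'CurrentTable only'
                    ELSE 'ArchiveTable only' END AS FoundIn
        FROM cur
        FULL OUTER JOIN arc ON cur.name = arc.name
        WHERE cur.name IS NULL OR arc.name IS NULL;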

  • Split a 2D scene in layers or have a z coordinate

    - by Bane
    I am in the process of writing a 2D game engine, and a dilemma has emerged. Let me explain the situation... I have a Scene class to which various objects can be added (Drawable, ParticleEmitter, Light2D, etc.), and since this is a 2D scene, things will obviously be drawn over each other. My first thought was that I could have basic add and remove methods, but I soon realized that there would then be no way for the programmer to control the order in which things were drawn. So I came up with two options, each with its pros and cons. A) Split the scene into layers. By that I mean instead of having the scene be a container of objects, have it be a container of layers, which are in turn containers of objects. B) Have some kind of z-coordinate, and have the scene sorted so objects with lower z get drawn first. Option A is pretty solid, but the problem is with the lights. In which layer do I add a light? Does it work across layers? On all bottom layers? And I still need the z coordinate to calculate the shadow! Option B would require me to change all my code from Vector2D positions to some kind of class that inherits from Vector2D and adds a z coordinate to it (I don't want it to be a Vector3D, because I still need all the same methods the 2D kind has, just with .z clamped on). Am I missing something? Is there an alternative to these methods? I'm working in JavaScript, if that makes a difference.
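
    A hedged sketch of option B in JavaScript (the class shape is mine, not the poster's): keep a z property on every object and sort just before drawing, so draw order never has to be managed by hand:

        // Minimal Scene sketch: objects are expected to expose {z, draw(ctx)}
        class Scene {
          constructor() {
            this.objects = [];
          }
          add(obj) {
            this.objects.push(obj);
          }
          draw(ctx) {
            // Lower z draws first; slice() leaves insertion order untouched
            this.objects.slice()
                        .sort((a, b) => a.z - b.z)
                        .forEach(o => o.draw(ctx));
          }
        }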

  • DFS-R + VSS - Run VSS at all locations or one?

    - by pbarranis
    I have 4 servers, one at each office location, each sharing read/write copies of 1.5TB of data. Changes are replicated 24/7 between all 4 servers via DFS-R, and all servers run Windows 2008 R2. I'm interested in implementing VSS (the Volume Shadow Copy Service) on this data. I've read that DFS-R and VSS play nicely together, but I'm left with one unanswered question: should I turn on VSS at all locations or just one (the headquarters)? Can I run VSS at all 4 locations safely, or is it wiser to run it at just 1 location? Thanks!

  • Why doesn't my IDE do background compiling/building?

    - by MKO
    Today I develop on a fairly complex computer: it has multiple cores, SSD drives, and what not. Still, most of the time I'm programming, the computer is leisurely doing nothing. When I need to compile and run/deploy a somewhat complex project, at best it still takes a couple of seconds. Why? Now that we're living more and more in the "age of instant", why can't I press F5 in Visual Studio and launch/deploy my application instantly? A couple of seconds might not sound so bad, but it's still cognitive friction and time that adds up, and frankly it makes programming less fun. So how could compilation be instant? Well, people tend to edit files in different assemblies; what if Visual Studio / the IDE constantly compiled and built everything I modified, any time it might be appropriate? Heck, if they wanted to go really advanced, they could do per-class compilation. The compilation might not work, but then it could just silently do nothing (except adding error messages to the error window). Surely today's computers could dedicate a core or two to this task, and if someone found it annoying, it could be disabled with an option. I know there are probably a thousand technical issues and some fancy shadow copying that would need to be resolved for this to be seamless and practical, but it sure would make programming more seamless. Is there any practical reason why this scenario isn't possible? Would the wear and tear of continually writing binaries be too much? Couldn't assemblies be held in memory until deployed/run?

  • Personalize your Experience with the Oracle Partner Store

    - by Kristin Rose
    "Any sufficiently advanced technology is indistinguishable from magic." - Arthur C. Clarke. So it may not be magic, but it's definitely close! Partners can now personalize their experience with the Oracle Partner Store and control what, and who, receives notifications. Let's take a look at the MyOPS tab in the Partner Store and explore more. Under "My Account", select Personal Information. In the Additional Email Addresses box, you can add email addresses, including distribution lists, that you want copied on every OPS notification you receive. If you select Preferences in the drop-down, you can set the following options: the language you want to receive alerts in; whether or not you want to receive order confirmation and order booked alerts; your default installation country and currency; and your default support offering and duration when you add items to the cart. Other preferences you can select from the drop-down include: Bill To Addresses, Ship To Addresses, preferred Bill To / Ship To company information, and credit cards. Please note that these settings will only affect your account in the Oracle Partner Store – they will not affect other users from your company. For more information, please refer to our training page. So take a moment to set up the Partner Store to meet your needs and save yourself some time. Abracadabra, Simon Davis, Senior Director, WW A&C Quote To Order

  • SQL SERVER – Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – T-SQL Example – Part 2 of 2

    - by pinaldave
    Yesterday I wrote a real-world story of a friend who thought they had an intrusion or virus, whereas the issue was really in the code. I strongly suggest you read my earlier blog post, Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – Part 1 of 2, before continuing, as this is the second part of that post. Let me reproduce the simple scenario in T-SQL. Building sample data:

        USE [TestDB]
        GO
        -- Creating Table Products
        CREATE TABLE [dbo].[Products](
              [ProductID] [int] NOT NULL,
              [ProductDesc] [varchar](50) NOT NULL,
              CONSTRAINT [PK_Products] PRIMARY KEY CLUSTERED
              ([ProductID] ASC)) ON [PRIMARY]
        GO
        -- Creating Table ProductDetails
        CREATE TABLE [dbo].[ProductDetails](
              [ProductDetailID] [int] NOT NULL,
              [ProductID] [int] NOT NULL,
              [Total] [int] NOT NULL,
              CONSTRAINT [PK_ProductDetails] PRIMARY KEY CLUSTERED
              ([ProductDetailID] ASC)) ON [PRIMARY]
        GO
        ALTER TABLE [dbo].[ProductDetails] WITH CHECK
              ADD CONSTRAINT [FK_ProductDetails_Products]
              FOREIGN KEY([ProductID])
              REFERENCES [dbo].[Products] ([ProductID])
              ON UPDATE CASCADE
              ON DELETE CASCADE
        GO
        -- Insert Data into Table
        USE TestDB
        GO
        INSERT INTO Products (ProductID, ProductDesc)
        SELECT 1, 'Bike'
        UNION ALL
        SELECT 2, 'Car'
        UNION ALL
        SELECT 3, 'Books'
        GO
        INSERT INTO ProductDetails ([ProductDetailID],[ProductID],[Total])
        SELECT 1, 1, 200
        UNION ALL
        SELECT 2, 1, 100
        UNION ALL
        SELECT 3, 1, 111
        UNION ALL
        SELECT 4, 2, 200
        UNION ALL
        SELECT 5, 3, 100
        UNION ALL
        SELECT 6, 3, 100
        UNION ALL
        SELECT 7, 3, 200
        GO

    Select data from the tables:

        SELECT * FROM Products
        SELECT * FROM ProductDetails
        GO

    Delete data from the Products table:

        DELETE FROM Products
        WHERE ProductID = 1
        GO

    Select data from the tables again:

        SELECT * FROM Products
        SELECT * FROM ProductDetails
        GO

    Clean up:

        DROP TABLE ProductDetails
        DROP TABLE Products
        GO

    My friend was confused, as no DELETE was ever fired at the ProductDetails table, yet rows were being deleted from it. The reason is the foreign key created between the Products and ProductDetails tables with the keywords ON DELETE CASCADE. When ON DELETE CASCADE is specified, deleting data from Table A also deletes any rows in other tables that reference it through the foreign key. Workaround 1: Design Changes – 3 Tables. Change the design to have more than two tables. Create one product master table with all the products; it should historically store the complete products list, and no products should ever be removed from it. Add another table, called CurrentProduct, containing only the products that should be visible in the product catalogue, and a third table called ProductHistory. There should be no use of the CASCADE keyword among them. Workaround 2: Design Changes – an IsVisible Column. You can keep the same two tables, 1) Products and 2) ProductDetails. Add a column of BIT datatype to Products and name it IsVisible, then change your application code to display the catalogue based on this column. There is no need to delete anything. Workaround 3: Bad Advice. (Bad advice begins here.) I call this bad advice because that is what it is: you should make the necessary design changes rather than use poor workarounds, which can further damage system and database integrity. Here are the examples. 1) Do not delete the data – not a real solution, but it can buy time to implement the design changes. 2) Remove the ON DELETE CASCADE – in this case you will end up with entries in ProductDetails that have no corresponding ProductID, and later on there will be lots of confusion. 3) Duplicate the data – you can move all the data from the Products table into the ProductDetails table, repeating it on each row, and then remove the CASCADE code; this will let you delete rows from the Products table without any issue. There are so many things wrong with this suggestion that I will not even start on them here. (Bad advice ends here.) Well, did I miss anything? Please help me with your suggestions. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
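
    As a footnote to Workaround 2 above, a minimal sketch against the sample schema (a soft delete in place of DELETE; names match the example tables):

        -- Add the visibility flag
        ALTER TABLE Products ADD IsVisible BIT NOT NULL DEFAULT 1;
        GO
        -- "Delete" a product without touching ProductDetails
        UPDATE Products SET IsVisible = 0 WHERE ProductID = 1;
        GO
        -- The catalogue query then shows only visible products
        SELECT ProductID, ProductDesc
        FROM Products
        WHERE IsVisible = 1;
        GO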

  • Guaranteed Restore Points as Fallback Method

    - by Mike Dietrich
    Thanks to the great audience yesterday at the Upgrade & Migration Workshop in Utrecht. That was really fun, and I was amazed by our new facilities (and the "wellness" lights surrounding the plenum room's walls). Another reason I like doing these workshops is that I often learn new things from you. So credits here go to Rick van Ek, who highlighted the following topic to me. Yesterday (and in some previous workshops), during the discussion about fallback strategies, I mentioned that you'd have to switch on Flashback Database beforehand to create a guaranteed restore point, in case you encounter an issue during the database upgrade. I knew that since Oracle Database 11.2 we've made it possible to switch Flashback Database on without taking the database into MOUNT status (you could switch it off while the database is open in all releases anyway). But before Oracle Database 11.2, that did require MOUNT status:

        SQL> create restore point rp1 guarantee flashback database;
        create restore point rp1 guarantee flashback database
        *
        ERROR at line 1:
        ORA-38784: Cannot create restore point 'RP1'.
        ORA-38787: Creating the first guaranteed restore point requires mount mode when flashback database is off.

    But Rick mentioned that I won't need to switch Flashback Database on at all to create a guaranteed restore point. And he's right: in older releases I would have had to go into MOUNT state to define the restore point, which meant restarting the database, but in 11.2 that's no longer necessary. The same applies when you upgrade your pre-11.2 database (e.g. an Oracle Database 10.2.0.4) to Oracle Database 11.2. As soon as you start your "old", not-yet-upgraded database in your 11.2 environment with STARTUP UPGRADE, you can define a guaranteed restore point. If you tail the alert.log, you'll see that the database starts the RVWR (Recovery Writer) background process; you just have to make sure that you've defined values for db_recovery_file_dest_size and db_recovery_file_dest.

        SQL> startup upgrade
        ORACLE instance started.
        Total System Global Area  417546240 bytes
        Fixed Size                  2228944 bytes
        Variable Size             134221104 bytes
        Database Buffers          272629760 bytes
        Redo Buffers                8466432 bytes
        Database mounted.
        Database opened.
        SQL> create restore point grpt guarantee flashback database;
        Restore point created.
        SQL> drop restore point grpt;

    And don't forget to drop that restore point sooner or later, as it is guaranteed and will fill up your Fast Recovery Area pretty quickly. Just as an aside: in any case, archivelog mode is required if you'd like to work with restore points. - Mike
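
    As a footnote: an easy way to keep an eye on lingering guaranteed restore points is to list them from the V$RESTORE_POINT view, along the lines of the (illustrative) query below:

        SQL> SELECT name, guarantee_flashback_database, time
          2  FROM v$restore_point;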

  • Ubuntu 12.04 VPS doesn't boot with mysql in nsswitch.conf

    - by chrisv
    A 1and1 VPS ("dynamic cloud server") does not boot any more as soon as mysql lookup is enabled in nsswitch.conf - any suggestions appreciated. Minimal setup to reproduce the problem: install the Ubuntu 12.04 LTS minimal server image; install mysql-server, libnss-mysql-bg, and nscd; configure /etc/libnss-mysql.cfg and /etc/libnss-mysql-root.cfg; set up the appropriate database tables; and configure NSS lookups through mysql in nsswitch.conf:

        passwd: compat mysql
        group:  compat mysql
        shadow: compat mysql

    Now, when I try to reboot the server, it just hangs. There are no logs (maybe due to /var not yet being mounted), and I can't see console output (since this is a VPS). Booting into the recovery image and removing "mysql" from /etc/nsswitch.conf makes the system bootable again, so this is definitely related to nsswitch/libnss-mysql-bg. There's a thread on gentoo-users which seems to describe a similar problem; unfortunately no real solution is described there, and the thread is rather old (from 2006), so I'm not sure whether it applies to me at all.

  • Deploying Oracle ADF Essentials Applications to Glassfish

    - by Shay Shmeltzer
    With the new Oracle ADF Essentials offering, you can now deploy applications that leverage Oracle ADF on the open source GlassFish 3.1 server. Deployment is documented in the official JDeveloper and ADF documentation (here), but below is a summary of the steps you'll need to take to get a basic Oracle ADF Essentials application to work on GlassFish, along with a video. Note: to make starting/stopping GlassFish easier for my demo, I used my GlassFish extension, which you can get here. First we'll install some ADF Runtime libraries on GlassFish. Download and install GlassFish (note: if you also have an Oracle DB on the same machine, you'll want to switch GlassFish's HTTP port to something other than 8080). Download the Oracle ADF Essentials packaging; this will get you an adf_essentials.zip file. Copy adf_essentials.zip to the lib directory of your GlassFish domain - on a default Windows install this would be C:\glassfish3\glassfish\domains\domain1\lib. Go to that lib directory and run unzip -j adf_essentials.zip, which will extract the ADF libraries into the directory. Now you can start the GlassFish server. Next, let's configure GlassFish to handle applications of the ADF type. Invoke the GlassFish admin console (http://localhost:4848) and log into your admin account. Go to Configurations->server-config->JVM Settings, choose the JVM Options tab, and add the following entries: -XX:MaxPermSize=512m (this entry should already exist, so just make sure it has a big enough value) and -Doracle.mds.cache=simple. While we are in the admin console, we can also define the JDBC connections that will be used by our application. Go into Resources->JDBC->JDBC Connection Pools and click to create a new one. Give it a name, choose the resource type javax.sql.XADataSource, choose Oracle as the database driver vendor, and click Next. Scroll down to the Additional Properties section and fill in the information for your database; the values for an Oracle XE would be user=hr, databaseName=XE, password=hr, serverName=localhost, driverType=thin, portNumber=1521. Click Finish, then click Ping to check that your connection works. Now define a new JDBC resource that uses the pool you just defined; in my example I called the resource jdbc/HRDS. You will need this name to match the name in your Application Module's connection configuration. Now restart the GlassFish server for the changes to take effect. Get an ADF application going (you can use the regular Fusion Application template for this). In the project properties of your ViewController project, under the Deployment section, click to edit the deployment profile defined there, go to Platform, choose GlassFish 3.1 from the drop-down list, and click OK to go back to your project. Then go to Application -> Application Properties -> Deployment, go to Platform, again choose GlassFish 3.1 from the drop-down list, and click OK. This step makes sure that JDeveloper will automatically add the necessary ADF libraries to the EAR file generated for deployment on GlassFish. Finally, go to Application->Deploy and deploy either to an EAR file or directly to a GlassFish server connection that you created. Things should just work, but if they don't, look at server.log in the log directory and check what error is in there.
    Here is a video demo of the various steps. Note: right now the deployment of an ADF application takes about 2 minutes on my machine; we are hoping to improve this timing in the future. People who are more familiar with GlassFish might want to explore using exploded-directory deployment and see if they can get it to work.
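
    For those who prefer scripting over the admin console, the pool and resource steps can presumably also be done with the asadmin CLI - a hedged sketch, with the pool name invented here and the property values matching the XE example above:

        asadmin create-jdbc-connection-pool \
          --datasourceclassname oracle.jdbc.xa.client.OracleXADataSource \
          --restype javax.sql.XADataSource \
          --property user=hr:password=hr:databaseName=XE:serverName=localhost:driverType=thin:portNumber=1521 \
          HRPool
        asadmin ping-connection-pool HRPool
        asadmin create-jdbc-resource --connectionpoolid HRPool jdbc/HRDS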

  • The Connected Company: WebCenter Portal - Feedback - Analytics and Polls

    - by Michael Snow
    Guest post by Mitchell Palski, Staff Sales Consultant. The importance of connecting peers has been widely recognized and socialized as a critical component of employee intranets. Organizations are striving to provide mediums for sharing knowledge and improving awareness across their enterprise. Indirectly, the socialization of your enterprise should lead to cost savings and improved product/service quality. However, the direct effects of connecting an organization's leadership with its employees are often overlooked. Oracle WebCenter Portal can help you bridge that gap by gathering implicit and explicit feedback. Implicit feedback through usage analytics: Analytics allows administrators to track and analyze WebCenter Portal traffic and usage, providing the following basic functionality. Usage tracking metrics: Analytics collects and reports metrics for common WebCenter Portal functions, including community and portlet traffic. Behavior tracking: Analytics can be used to analyze WebCenter Portal metrics to determine usage patterns, such as page visit duration and usage over time. User profile correlation: Analytics can correlate metric information with user profile information, so usage tracking reports can be viewed and filtered by user profile data such as country, company, or title. Usage analytics help measure how users interact with website content, allowing your IT staff and business analysts to make informed decisions when planning development for your next intranet enhancement. For example: if users are not accessing your Announcements page and are missing critical information they need to be aware of, you may elect to use graphical links on the home page to direct more users to that page; as a result, the number of employee help requests to HR decreases. If users are not accessing your News page to read recent articles, you may elect to stop spending as much time updating the page with new stories and cut costs in your communications department. If you notice a high volume of users accessing the Employee Dashboard page, your organization may decide to continue making personalization enhancements to the page and investing in the Portal tool that most users are accessing. Usage analytics aren't necessarily a new concept in the IT industry; what sets WebCenter Portal Analytics apart is that the reports are tailored for WebCenter-specific tools, and a report can be added to a page as easily as a drag-and-drop. Explicit feedback through polls: WebCenter Portal users can create, edit, take, and analyze online polls. With polls, you can survey your audience (such as their opinions and their experience level), check whether they can recall important information, and gather feedback and metrics. How many times have you been involved in a requirements discussion where someone asked a question like "Well, how do you know that no one likes our home page?" and the response was "Everyone says they hate it! That's all anyone complains about."? No one has a measurable, quantifiable metric for user satisfaction. Analytics measure usage, but your organization also needs to measure the quality of your portal as defined by the actual people who use it. With that information, your leadership can make informed decisions that not only match usage patterns but also relate to employees on a personal level.
    The end result is a connection between employees and leadership that gives everyone in the organization a sense of ownership of their Portal, rather than a feeling that development decisions are restricted to leadership only. Polls can be created and edited through the Poll Manager, and both polls and poll results can easily be added to a page through drag-and-drop. What did we learn? Being a "connected" company doesn't just mean helping employees connect with each other horizontally across your enterprise. It also means connecting those employees to the decisions that affect their everyday activities. Through WebCenter Portal Usage Analytics and Polls, any decision to remove a Portal page, update a Portal page, or develop new Portal functionality can be justified by quantifiable metrics. Instead of fielding complaints and hearing that your employees don't have a voice, give those employees a voice and listen!

  • Using e-mail address as user name for SMTP and POP3

    - by PeterMmm
    I have exim4 set up as SMTP. My user naming schema names all mail users on this server m001, m002, m003, ..., and then redirects to a real e-mail address via virtual domains. How can I allow my users to authenticate with exim to send mail using either their system user name (m001) or the email address ([email protected])? Login information for m001 is stored in the Linux system files (passwd, shadow). The users are linked through entries in a virtual address table for each domain that this server serves:

        # /etc/exim4/virtual/example.com
        m001: [email protected]
        m002: [email protected]
        m003: [email protected]

    Can the same be applied to qpopper?

  • T-SQL (SCD) Slowly Changing Dimension Type 2 using a merge statement

    - by AtulThakor
    While working recently on a stored procedure that loads records into a data warehouse, I found that the existing record was being expired using an UPDATE statement, followed by an INSERT to add the new active record. Playing around with the MERGE statement, you can actually expire the current record and insert the new record within one clean statement. This is how the statement works: we run a normal MERGE, which inserts a record when there is no match; when we do match the record, we update the existing row by expiring and deactivating it. At the end of the MERGE we use the OUTPUT clause to return the staging values for the update, wrap the whole MERGE within an INSERT statement, and add new active rows for the records we just expired. I've added the full script at the bottom so you can paste it and play around.

        INSERT INTO ExampleFactUpdate
              (PolicyID,
              Status)
        SELECT -- these columns are returned from the OUTPUT clause
              PolicyID,
              Status
        FROM
        (
              -- merge statement on a unique id, in this case PolicyID
              MERGE dbo.ExampleFactUpdate dp
              USING dbo.ExampleStag s
                    ON dp.PolicyID = s.PolicyID
              WHEN NOT MATCHED THEN -- when we can't match the record, we insert a new record and this is all that happens
                    INSERT (PolicyID, Status)
                    VALUES (s.PolicyID, s.Status)
              WHEN MATCHED -- if it already exists
                    AND ExpiryDate IS NULL -- and the expiry date is null
              THEN UPDATE
                    SET
                          dp.ExpiryDate = GETDATE(), -- we set the expiry on the existing record
                          dp.Active = 0 -- and deactivate the existing record
              OUTPUT $Action MergeAction, s.PolicyID, s.Status -- the OUTPUT clause returns a merge action, which can
        ) MergeOutput -- be insert/update/delete; in our example a matched record has been updated (expired),
        WHERE -- so we filter with a where clause
              MergeAction = 'Update';

    Complete source for the example:

        IF OBJECT_ID('ExampleFactUpdate') > 0
              DROP TABLE ExampleFactUpdate
        GO

        CREATE TABLE ExampleFactUpdate(
              ID int IDENTITY(1,1),
              PolicyID varchar(100),
              Status varchar(100),
              EffectiveDate datetime DEFAULT GETDATE(),
              ExpiryDate datetime,
              Active bit DEFAULT 1
        )

        INSERT INTO ExampleFactUpdate(
              PolicyID,
              Status)
        SELECT
              1,
              'Live'

        -- Create the staging table
        IF OBJECT_ID('ExampleStag') > 0
              DROP TABLE ExampleStag
        GO

        CREATE TABLE ExampleStag(
              PolicyID varchar(100),
              Status varchar(100))

        -- add some data
        INSERT INTO ExampleStag(
              PolicyID,
              Status)
        SELECT
              1,
              'Lapsed'
        UNION ALL
        SELECT
              2,
              'Quote'

        SELECT * FROM ExampleFactUpdate
        SELECT * FROM ExampleStag

        INSERT INTO ExampleFactUpdate
              (PolicyID,
              Status)
        SELECT
              PolicyID,
              Status
        FROM
        (
              MERGE dbo.ExampleFactUpdate dp
              USING dbo.ExampleStag s
                    ON dp.PolicyID = s.PolicyID
              WHEN NOT MATCHED THEN
                    INSERT (PolicyID, Status)
                    VALUES (s.PolicyID, s.Status)
              WHEN MATCHED
                    AND ExpiryDate IS NULL
              THEN UPDATE
                    SET
                          dp.ExpiryDate = GETDATE(),
                          dp.Active = 0
              OUTPUT $Action MergeAction, s.PolicyID, s.Status
        ) MergeOutput
        WHERE
              MergeAction = 'Update';

        SELECT * FROM ExampleFactUpdate

  • Grub does not show a Windows 8 option after dual boot

    - by skytreader
    So, I've successfully dual-booted my Windows 8 machine with Ubuntu 12.04. However, I still don't have a convenient method of choosing which OS to load at boot time. After installing Ubuntu, my computer still loaded Windows 8 directly. I then added grubx64.efi to the whitelist of my boot loader, but after that my machine loaded Ubuntu directly, without even a shadow of GRUB showing up! I used boot-repair and got this paste.ubuntu URL: paste.ubuntu.com/1326074. After running boot-repair (and re-whitelisting the grubx64.efi file), GRUB now shows up, but without any Windows 8 option! Lastly, I ran sudo fdisk -l and it gave me this:

        WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.

        Disk /dev/sda: 750.2 GB, 750156374016 bytes
        255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x6396389f

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1               1  1465149167   732574583+  ee  GPT

        Partition 1 does not start on a physical sector boundary.

    I'm guessing my problem has something to do with the warning from fdisk above, but I don't know what to do about it. How do I proceed now?

  • USB sector 0 not found – Kingston USB DT100 G2

    - by java
    Windows constantly asks me to "Format Disk". When I go to the command prompt and type format H: /fs:ntfs or format H: /fs:fat32, the response is "Cannot determine the number of sectors on this volume." DISKPART reports the following:

        DISKPART> detail disk

        Kingston DT 100 G2 USB Device
        Disk ID: 00000000
        Type   : USB
        Status : Online
        Path   : 0
        Target : 0
        LUN ID : 0
        Location Path : UNAVAILABLE
        Current Read-only State : No
        Read-only  : No
        Boot Disk  : No
        Pagefile Disk  : No
        Hibernation File Disk  : No
        Crashdump Disk  : No
        Clustered Disk  : No

        DISKPART> detail volume

        Read-only              : No
        Hidden                 : No
        No Default Drive Letter: No
        Shadow Copy            : No
        Offline                : No
        BitLocker Encrypted    : No
        Installable            : No

        Volume Capacity        : 0 B
        Volume Free Space      : 0 B

    What is the problem?

  • ntbackup workalike for ad hoc full backups in Windows 7 that's free and preferably open source

    - by Justin Dearing
    On Windows 2000 and XP machines I used to be able to do the following:

        ntbackup backup systemstate c: /f e:\backups\machineName\machineName-full+systemstate_200101206.bkf

    This gave me a full backup of the system that I could use to do a system restore after a barebones OS install. Windows 7 has a great utility for regular backups, with alerting and all that stuff, but it does not seem to have command-line support. I'd like a backup solution for my Windows 7 systems with the following features: it is free; it is open source (preferably); it works while the system is booted and leaves the system functional (Clonezilla is great for offline backups, and I use that too); it gives me a backup suited for a full or partial system restore (ruling out most imaging software, even when it can work on a booted system via some sort of shadow-copy voodoo); and it can work via the command line. Compression would be nice; the ability to pipe output would be better.
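
    One caveat worth checking before ruling out the built-in tooling entirely: Windows 7 does ship a command-line front end to its backup engine, wbadmin, though it is image-oriented rather than .bkf-based - a hedged example of a full system backup to drive E::

        wbadmin start backup -backupTarget:E: -include:C: -allCritical -quiet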

  • How to configure ubuntu ldap client to get password policies from server?

    - by Rafaeldv
    I have an LDAP server on CentOS, 389-ds. I configured the client, Ubuntu 12.04, to authenticate against that base, and it works very well. But it doesn't get the password policies from the server. For example, if I set the policy to force the user to change their password on first login, Ubuntu ignores it and always logs the user in. How can I set up the client to get the policies? Here are the client files:

        # /etc/nsswitch.conf
        passwd:         files ldap
        group:          files ldap
        shadow:         files ldap
        hosts:          files mdns4_minimal [NOTFOUND=return] dns mdns4
        networks:       files
        protocols:      db files
        services:       db files
        ethers:         db files
        rpc:            db files
        netgroup:       nis
        sudoers:        ldap files

        # common-auth
        auth [success=2 default=ignore] pam_unix.so nullok_secure
        auth [success=1 default=ignore] pam_ldap.so use_first_pass
        auth requisite pam_deny.so
        auth required pam_permit.so
        auth optional pam_cap.so

        # common-account
        account [success=2 new_authtok_reqd=done default=ignore] pam_unix.so
        account [success=1 default=ignore] pam_ldap.so
        account requisite pam_deny.so
        account required pam_permit.so

        # common-password
        password requisite pam_cracklib.so retry=3 minlen=8 difok=3
        password [success=2 default=ignore] pam_unix.so obscure use_authtok try_first_pass sha512
        password [success=1 user_unknown=ignore default=die] pam_ldap.so use_authtok try_first_pass
        password requisite pam_deny.so
        password required pam_permit.so
        password optional pam_gnome_keyring.so

        # common-session
        session [default=1] pam_permit.so
        session requisite pam_deny.so
        session required pam_permit.so
        session optional pam_umask.so
        session required pam_unix.so
        session optional pam_ldap.so
        session optional pam_ck_connector.so nox11
        session optional pam_mkhomedir.so skel=/etc/skel umask=0022

        # /etc/ldap.conf
        base dc=a,dc=b,dc=c
        uri ldaps://a.b.c/
        ldap_version 3
        rootbinddn cn=directory manager
        pam_password md5
        sudoers_base ou=SUDOers,dc=a,dc=b,dc=c
        pam_lookup_policy yes
        pam_check_host_attr yes
        nss_initgroups_ignoreusers avahi,avahi-autoipd,backup,bin,colord,daemon,games,gnats,hplip,irc,kernoops,libuuid,lightdm,list,lp,mail,man,messagebus,news,proxy,pulse,root,rtkit,saned,speech-dispatcher,sshd,sync,sys,syslog,usbmux,uucp,whoopsie,www-data

        # /etc/ldap/ldap.conf
        BASE dc=a,dc=b,dc=c
        URI ldaps://a.b.c/
        ssl on
        use_sasl no
        tls_checkpeer no
        sudoers_base ou=SUDOers,dc=a,dc=b,dc=c
        sudoers_debug 2
        pam_lookup_policy yes
        pam_check_host_attr yes
        TLS_CACERT /etc/ssl/certs/ca-certificates.crt
        TLS_REQCERT never

  • Efficiently rendering to 3D texture

    - by TravisG
    I have an existing depth texture and some other color textures, and I want to process the information in them by rendering into a 3D texture (based on the depth contained in the depth texture, i.e. a point at (x/y) in the depth texture will be rendered to (x/y/texture(depth,uv)) in the 3D texture). Simply doing one manual draw call per slice of the 3D texture (via glFramebufferTextureLayer) is terribly slow, since I don't know beforehand which slice of the 3D texture a given texel from one of the color textures or the depth texture belongs to. This means the entire process is effectively:

        for each slice
            for each texel in depth texture
                process color textures and render to slice

    So I have to sample the depth texture completely for each slice, and I also have to go through the processing (at least up to the discard;) for all texels in it. It would be much faster if I could rearrange the process to:

        for each texel in depth texture
            figure out which slice it should end up in
            process color textures and render to slice

    Is this possible? If so, how? What I'm actually trying to do: the color textures contain lighting information (as seen from the light's view; it's a reflective shadow map). I want to accumulate that information in the 3D texture and then later use it to light the scene. More specifically, I'm trying to implement Crytek's Light Propagation Volumes algorithm.
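
    One possible answer (a hedged sketch, not from the original post) is layered rendering: attach the whole 3D texture with glFramebufferTexture, draw one point per depth texel, and let a geometry shader route each point into its slice through gl_Layer, so the depth texture only has to be sampled once:

        // Hypothetical geometry shader (names are illustrative):
        // routes one point per depth texel into the slice chosen
        // from the depth value sampled in the vertex shader.
        #version 150
        layout(points) in;
        layout(points, max_vertices = 1) out;

        uniform int numSlices;   // depth of the 3D texture
        in float vDepth[];       // depth passed up from the vertex shader

        void main() {
            // Pick the slice; clamp to a valid index
            gl_Layer = clamp(int(vDepth[0] * float(numSlices)), 0, numSlices - 1);
            gl_Position = gl_in[0].gl_Position;
            EmitVertex();
            EndPrimitive();
        }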

  • Why is my shadowmap all white?

    - by Berend
    I was trying out a shadow map, but all my shadow is white. I think there is some problem with my homogeneous component. Can anybody help me? The rest of my code is written in XNA. Here is the HLSL code I used:

        float4x4 xWorld;
        float4x4 xView;
        float4x4 xProjection;

        struct VertexToPixel
        {
            float4 Position  : POSITION;
            float4 ScreenPos : TEXCOORD1;
            float  Depth     : TEXCOORD2;
        };

        struct PixelToFrame
        {
            float4 Color : COLOR0;
        };

        //------- Technique: ShadowMap --------
        VertexToPixel MyVertexShader(float4 inPos : POSITION0, float3 inNormal : NORMAL0)
        {
            VertexToPixel Output = (VertexToPixel)0;
            float4x4 preViewProjection = mul(xView, xProjection);
            float4x4 preWorldViewProjection = mul(xWorld, preViewProjection);
            Output.Position = mul(inPos, mul(xWorld, preViewProjection));
            Output.Depth = Output.Position.z / Output.Position.w;
            Output.ScreenPos = Output.Position;
            return Output;
        }

        float4 MyPixelShader(VertexToPixel PSIn) : COLOR0
        {
            PixelToFrame Output = (PixelToFrame)0;
            Output.Color = PSIn.ScreenPos.z / PSIn.ScreenPos.w;
            return Output.Color;
        }

        technique ShadowMap
        {
            pass Pass0
            {
                VertexShader = compile vs_2_0 MyVertexShader();
                PixelShader = compile ps_2_0 MyPixelShader();
            }
        }

  • With a little effort you can “SEMI”-protect your C# assemblies with obfuscation

    - by mbcrump
    This method will not protect your assemblies from an experienced hacker. Every day we see new keygens, cracks, and serials released that work around the copy protection of small companies. But this is a simple process that will make a lot of hackers quit, because so many others use nothing. If you were a thief, would you pick the house with security signs and an alarm, or the one with nothing? So, to begin: obfuscation is the concealment of meaning in communication, making it confusing and harder to interpret. You are probably familiar with the term, and probably ignored it the way most programmers ignore user security. Let's start with a sample program. In the code below, the program appears exactly as it does in Visual Studio; when it runs, you get either a True or a False in a console window.

        using System;
        using System.Diagnostics;
        using System.Linq;

        namespace ObfuscateMe
        {
            class Program
            {
                static void Main(string[] args)
                {
                    // Returns True or False, depending on whether notepad is running
                    Console.WriteLine(IsProcessOpen("notepad"));
                    Console.ReadLine();
                }

                public static bool IsProcessOpen(string name)
                {
                    return Process.GetProcesses().Any(clsProcess => clsProcess.ProcessName.Contains(name));
                }
            }
        }

    Pretend that this is a commercial application. The hacker will only have the executable, and maybe a few config files, etc. After reviewing the executable, he can determine whether it was produced in .NET by examining it in ILDASM or Red Gate's Reflector. We'll examine the file using Red Gate's Reflector: upon launch, we simply drag/drop the exe onto the application, and Reflector decompiles the Main and IsProcessOpen methods. Without any other knowledge of how the program works, the hacker could export the executable to a Visual Studio project, or just copy this code in, and our application would run. This is Reflector's output:

        using System;
        using System.Diagnostics;
        using System.Linq;

        namespace ObfuscateMe
        {
            class Program
            {
                static void Main(string[] args)
                {
                    Console.WriteLine(IsProcessOpen("notepad"));
                    Console.ReadLine();
                }

                public static bool IsProcessOpen(string name)
                {
                    return Process.GetProcesses().Any<Process>(delegate(Process clsProcess)
                    {
                        return clsProcess.ProcessName.Contains(name);
                    });
                }
            }
        }

    The code is not identical, but it returns the same value. At this point, with a little bit of effort, you could prevent the hacker from reverse engineering your code so quickly by using Eazfuscator.NET. Eazfuscator.NET is just one of many programs built for this; Visual Studio ships with a community version of Dotfuscator. So download and launch Eazfuscator.NET and drag/drop your executable/project into the window. It will churn for a few minutes, depending on whether you have a quad-core or not. After it finishes, open the executable in Red Gate Reflector again: the Main and IsProcessOpen methods now come back as jumbled characters, and reusing the code is not nearly as easy as in the first example.
    I am aware of methods around this, but it takes more effort, and unless the hacker is up for the challenge, they will just pick another program. This is also helpful if you are a consultant and make clients pay a yearly license fee: it would keep the average software developer from jumping into your security routine after you have left. I hope this article helped someone. If you have any feedback, please leave it in the comments below.
