Search Results

Search found 1721 results on 69 pages for 'dr evil'.

Page 14/69

  • How to authorize standard users to install drivers

    - by Dr I
    I'm currently looking for a way to authorize my non-administrator users to install drivers. Here is the situation: all my users are standard users, and they have a VirtualBox hypervisor for when they need administrator rights. But if they plug a USB device into the local machine and try to redirect it to the virtual machine, Windows asks for administrator rights. I've tried setting up these GPOs: Allow standard users to install drivers; Install WHQL Drivers: Allow Silently. I still can't get it to work.

    Read the article

  • How can I start an X11 session on my headless Fedora 13 server?

    - by DR
    I have a small home server using Fedora 13 as its operating system. Since the upgrade to Fedora 13 I cannot start the X11 server (I need it for VNC). When I try to start the server, both the nouveau and the original NVIDIA driver claim that there's no physical monitor attached (which is true) and that the X server cannot start because the initial display modes cannot be probed. I tried to manually add the display modes to xorg.conf, but they seem to be ignored. Some forums suggest simply using the VESA driver in this situation, but since I can't get it running either (different, more obscure message) I want to get it working with the nouveau driver as a matter of principle. Temporarily attaching a monitor would mean a lot of work for me (1 hour, and currently it's almost 35°C/95°F in my home), so I only want to try that if it definitely fixes the problem and stays fixed once I remove the monitor again. How can I make the driver work without a physical monitor attached? Thank you for your time and your help!
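
    For reference (this is not part of the original question), one common headless workaround is to describe the monitor explicitly in xorg.conf so nothing has to be probed. A minimal sketch, assuming the xf86-video-dummy driver package is installed (the asker would substitute "nouveau" if sticking to that driver on principle):

        Section "Monitor"
            Identifier  "HeadlessMonitor"
            HorizSync   28.0-80.0          # generic LCD-style ranges; adjust as needed
            VertRefresh 48.0-75.0
        EndSection

        Section "Device"
            Identifier "HeadlessCard"
            Driver     "dummy"             # from xf86-video-dummy; needs no physical output
            VideoRam   256000
        EndSection

        Section "Screen"
            Identifier   "HeadlessScreen"
            Device       "HeadlessCard"
            Monitor      "HeadlessMonitor"
            DefaultDepth 24
            SubSection "Display"
                Depth 24
                Modes "1280x1024"
            EndSubSection
        EndSection

    With a fixed mode list the server never has to probe a display, which is exactly the step that fails on a headless box.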

    Read the article

  • TrueCrypt system partition partially encrypted, but then drive corrupted and won't boot - decrypting so I can access files to back up?

    - by Dr.Seuss
    I was attempting to encrypt my (Windows 7) system drive with TrueCrypt when it stopped at around 15% and said that there was a segment error and that it could not proceed until it was fixed. So I restarted the computer and ran HDD Regenerator, which fixed the bad sectors on the drive, but now my system cannot boot. I've run a number of recovery disks to no avail (Windows repair is unable to fix it), and the drive won't mount under a Linux live CD because the drive is encrypted. I then tried mounting the drive using TrueCrypt under the Linux distribution on the disk and selected "Mount partition using system encryption without pre-boot authentication" so I could decrypt it, but I get an error message about that only being possible once the entire system is encrypted. How do I get out of this mess? I need to be able to back up the data that's on that partially encrypted drive so I can reinstall my operating system.

    Read the article

  • Understanding Unix Permissions (w/ ACL)

    - by Dr. DOT
    I am trying to set permissions on my server properly. Currently I have a number of directories and files chmod'd at 0777 -- but I am not comfortable with it being this way. So, on the advice of a Server Fault specialist, I had my hosting provider install ACL support on my shared virtual server. When I FTP to the server as my FTP user account "abc", I can do everything I need to do (and rightfully so), because all my dirs and files are owned by "abc", the group is "abc", and the 1st octet is set to 7 (rwx). That much I get. But here's where it gets dark gray for me. PHP runs as user "nobody", so when someone browses one of my web pages that either ends in .php or has some embedded PHP, I assume the last octet controls the access. Because all my dirs and files are owned by "abc" and assigned to group "abc", if the last octet were a 4 (r--) then the server would let the browser read the file. If it were a 6 (rw-) then the server would also let the browser write to the file or directory, correct? What if the web document does not end in .php or does not have any PHP embedded? What is the user then? And how can I use ACLs so I don't have to set the permission to 6 (rw-) or even 7 (rwx)? [Not sure what execute does or means.] I'm just looking for some sort of policy to best lock down my dirs and files while still allowing my PHP scripts to do uploads and write to files (so my users don't call to tell me "permission denied"). Thanks to anyone out there willing to lend me a hand. It is greatly appreciated.
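
    For illustration only (not from the original post; the paths and the "nobody" user are assumptions), POSIX ACLs let you grant the PHP user write access to just the directories that need it, while everything else stays at a tighter mode:

        # baseline: owner abc gets rwx on dirs / rw on files, everyone else read-only
        chmod -R u=rwX,go=rX /home/abc/public_html

        # grant the PHP user write access only where uploads actually land
        setfacl -R -m u:nobody:rwX /home/abc/public_html/uploads
        # default ACL so files PHP creates there inherit the same access
        setfacl -R -d -m u:nobody:rwX /home/abc/public_html/uploads

        # verify the effective permissions
        getfacl /home/abc/public_html/uploads

    As for the execute bit: on a directory it means "may enter/traverse it"; on a regular file it means "may run it as a program", which web content normally doesn't need.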

    Read the article

  • Apache mod_wsgi elegant clustering method

    - by Dr I
    I'm currently trying to build a scalable infrastructure for my Python web servers -- more precisely, to find the most elegant way to build a scalable cluster to host all my Python web services. For now, my setup looks like this: 1 x Puppet master to deploy my servers; 2 x Apache reverse-proxy front-end servers; 1 x Apache httpd server which hosts the Python WSGI applications, bound using mod_wsgi; 4 x clustered MongoDB servers. Everything is fine concerning the reverse proxies and the DB back end -- I'm able to easily add a new reverse proxy or a new DB node -- but my problem is the Python web server. I thought about just provisioning a new node with exactly the same configuration and an rsync replication between the two nodes, but that's not really useful in terms of deployment for my developers, etc. So if you have a solution that is as efficient and elegant as a Tomcat cluster, I'll be really happy to hear it ;-)
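
    As one possible sketch (not from the original post; host names are placeholders), the existing Apache reverse proxies could spread requests across several identical mod_wsgi back ends with mod_proxy_balancer, so adding a Python node becomes "provision with Puppet, add one BalancerMember":

        # on the front-end proxies; needs mod_proxy, mod_proxy_http and mod_proxy_balancer loaded
        <Proxy balancer://wsgi-backends>
            BalancerMember http://wsgi01.internal:8080 retry=30
            BalancerMember http://wsgi02.internal:8080 retry=30
        </Proxy>

        ProxyPass        /app balancer://wsgi-backends/app
        ProxyPassReverse /app balancer://wsgi-backends/app

    If the WSGI nodes are kept stateless (state living in MongoDB), they can be rebuilt from the Puppet manifests rather than rsync'd from each other.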

    Read the article

  • T-SQL Tuesday #33: Trick Shots: Undocumented, Underdocumented, and Unknown Conspiracies!

    - by Most Valuable Yak (Rob Volk)
    Mike Fal (b | t) is hosting this month's T-SQL Tuesday on Trick Shots.  I love this choice because I've been preoccupied with sneaky/tricky/evil SQL Server stuff for a long time and have been presenting on it for the past year.  Mike's directives were "Show us a cool trick or process you developed…It doesn’t have to be useful", which most of my blogging definitely fits, and "Tell us what you learned from this trick…tell us how it gave you insight in to how SQL Server works", which is definitely a new concept.  I've done a lot of reading and watching on SQL Server Internals and even attended training, but sometimes I need to go explore on my own, using my own tools and techniques.  It's an itch I get every few months, and, well, it sure beats workin'.

    I've found some people to be intimidated by SQL Server's internals, and I'll admit there are A LOT of internals to keep track of, but there are tons of excellent resources that clearly document most of them, and show how knowing even the basics of internals can dramatically improve your database's performance.  It may seem like rocket science, or even brain surgery, but you don't have to be a genius to understand it.  Although being an "evil genius" can help you learn some things they haven't told you about. ;)

    This blog post isn't a traditional "deep dive" into internals, it's more of an approach to find out how a program works.  It utilizes an extremely handy tool from an even more extremely handy suite of tools, Sysinternals.  I'm not the only one who finds Sysinternals useful for SQL Server: Argenis Fernandez (b | t), Microsoft employee and former T-SQL Tuesday host, has an excellent presentation on how to troubleshoot SQL Server using Sysinternals, and I highly recommend it.  Argenis didn't cover the Strings.exe utility, but I'll be using it to "hack" the SQL Server executable (DLL and EXE) files.

    Please note that I'm not promoting software piracy or applying these techniques to attack SQL Server via internal knowledge.  This is strictly educational and doesn't reveal any proprietary Microsoft information.  And since Argenis works for Microsoft and demonstrated Sysinternals with SQL Server, I'll just let him take the blame for it. :P  (The truth is I've used Strings.exe on SQL Server before I ever met Argenis.)

    Once you download and install Strings.exe you can run it from the command line.  For our purposes we'll want to run this in the Binn folder of your SQL Server instance (I'm referencing SQL Server 2012 RTM):

        cd "C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\Binn"
        C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\Binn> strings *sql*.dll > sqldll.txt
        C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\Binn> strings *sql*.exe > sqlexe.txt

    I've limited myself to DLLs and EXEs that have "sql" in their names.  There are quite a few more but I haven't examined them in any detail.  (Homework assignment for you!)  If you run this yourself you'll get 2 text files, one with all the extracted strings from every SQL DLL file, and the other with the SQL EXE strings.  You can open these in Notepad, but you're better off using Notepad++, EditPad, Emacs, Vim or another more powerful text editor, as these will be several megabytes in size.

    And when you do open it…you'll find…a TON of gibberish.  (If you think that's bad, just try opening the raw DLL or EXE file in Notepad.  And by the way, don't do this in production, or even on a running instance of SQL Server.)
    Even if you don't clean up the file, you can still use your editor's search function to find a keyword like "SELECT" or some other item you expect to be there.  As dumb as this sounds, I sometimes spend my lunch break just scanning the raw text for anything interesting.  I'm boring like that.

    Sometimes though, having these files available can lead to some incredible learning experiences.  For me the most recent time was after reading Joe Sack's post on non-parallel plan reasons.  He mentions a new SQL Server 2012 execution plan element called NonParallelPlanReason, and demonstrates a query that generates "MaxDOPSetToOne".  Joe (formerly on the Microsoft SQL Server product team, so he knows this stuff) mentioned that this new element was not currently documented and tried a few more examples to see what other reasons could be generated.

    Since I'd already run Strings.exe on the SQL Server DLLs and EXE files, it was easy to run grep/find/findstr for MaxDOPSetToOne on those extracts.  Once I found which file it belonged to (sqlmin.dll) I opened the text to see if the other reasons were listed.  As you can see in my comment on Joe's blog, there were about 20 additional non-parallel reasons.  And while it's not "documentation" of this underdocumented feature, the names are pretty self-explanatory about what can prevent parallel processing.  I especially like the ones about cursors – more ammo! – and am curious about the PDW compilation and Cloud DB replication reasons.

    One reason completely stumped me: NoParallelHekatonPlan.  What the heck is a hekaton?  Google and Wikipedia were vague, and the top results were not in English.  I found one reference to Greek, stating "hekaton" can be translated as "hundredfold"; with a little more Wikipedia-ing this leads to hecto, the prefix for "one hundred" as a unit of measure.  I'm not sure why Microsoft chose hekaton for such a plan name, but having already learned some Greek I figured I might as well dig some more in the DLL text for hekaton.  Here's what I found:

        hekaton_slow_param_passing
        Occurs when a Hekaton procedure call dispatch goes to slow parameter passing code path
        The reason why Hekaton parameter passing code took the slow code path
        hekaton_slow_param_pass_reason
        sp_deploy_hekaton_database
        sp_undeploy_hekaton_database
        sp_drop_hekaton_database
        sp_checkpoint_hekaton_database
        sp_restore_hekaton_database
        e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\hkproc.cpp
        e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\matgen.cpp
        e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\matquery.cpp
        e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\sqlmeta.cpp
        e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\resultset.cpp

    Interesting!  The first 4 entries (in red) mention parameters and "slow code".  Could this be the foundation of the mythical DBCC RUNFASTER command?  Have I been passing my parameters the slow way all this time?  And what about those sp_xxxx_hekaton_database procedures (in blue)?  Could THEY be the secret to a faster SQL Server?  Could they promise a "hundredfold" improvement in performance?  Are these special, super-undocumented DIB (databases in black)?
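
    (An illustrative aside, not from the original post: the grep/findstr step against the text files produced by the extraction commands above could be as simple as this, with /n printing line numbers and /c: matching the literal string:)

        findstr /n /c:"MaxDOPSetToOne" sqldll.txt sqlexe.txt
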
    I decided to look in the SQL Server system views for any objects with hekaton in the name, or references to them, in hopes of discovering some new code that would answer all my questions:

        SELECT name FROM sys.all_objects WHERE name LIKE '%hekaton%'
        SELECT name FROM sys.all_objects WHERE object_definition(OBJECT_ID) LIKE '%hekaton%'

    Which revealed:

        name
        ------------------------
        (0 row(s) affected)

        name
        ------------------------
        sp_createstats
        sp_recompile
        sp_updatestats
        (3 row(s) affected)

    Hmm.  Well that didn't find much.  Looks like these procedures are seriously undocumented, unknown, perhaps forbidden knowledge.  Maybe a part of some unspeakable evil?  (No, I'm not paranoid, I just like mysteries and thought that punching this up with that kind of thing might keep you reading.  I know I'd fall asleep without it.)

    OK, so let's check out those 3 procedures and see what they reveal when I search for "Hekaton".  sp_createstats:

        -- filter out local temp tables, Hekaton tables, and tables for which current user has no permissions
        -- Note that OBJECTPROPERTY returns NULL on type="IT" tables, thus we only call it on type='U' tables

    OK, that's interesting, let's go looking down a little further:

        ((@table_type<>'U') or (0 = OBJECTPROPERTY(@table_id, 'TableIsInMemory'))) and -- Hekaton table

    Wellllll, that tells us a few new things:

      • There's such a thing as Hekaton tables (UPDATE: I'm not the only one to have found them!)
      • They are not standard user tables and probably not in memory (UPDATE: I misinterpreted this because I didn't read all the code when I wrote this blog post.)
      • The OBJECTPROPERTY function has an undocumented TableIsInMemory option

    Let's check out sp_recompile:

        -- (3) Must not be a Hekaton procedure.

    And once again go a little further:

        if (ObjectProperty(@objid, 'IsExecuted') <> 0 AND
            ObjectProperty(@objid, 'IsInlineFunction') = 0 AND
            ObjectProperty(@objid, 'IsView') = 0 AND
            -- Hekaton procedure cannot be recompiled
            -- Make them go through schema version bumping branch, which will fail
            ObjectProperty(@objid, 'ExecIsCompiledProc') = 0)

    And now we learn that hekaton procedures also exist, they can't be recompiled, there's a "schema version bumping branch" somewhere, and OBJECTPROPERTY has another undocumented option, ExecIsCompiledProc.  (If you experiment with this you'll find this option returns null, I think it only works when called from a system object.)  This is neat!

    Sadly sp_updatestats doesn't reveal anything new, the comments about hekaton are the same as sp_createstats.  But we've ALSO discovered undocumented features for the OBJECTPROPERTY function, which we can now search for:

        SELECT name, object_definition(OBJECT_ID) FROM sys.all_objects WHERE object_definition(OBJECT_ID) LIKE '%OBJECTPROPERTY(%'

    I'll leave that to you as more homework.  I should add that searching the system procedures was recommended long ago by the late, great Ken Henderson, in his Guru's Guide books, as a great way to find undocumented features.  That seems to be really good advice!

    Now if you're a programmer/hacker, you've probably been drooling over the last 5 entries for hekaton (in green), because these are the names of source code files for SQL Server!  Does this mean we can access the source code for SQL Server?  As The Oracle suggested to Neo, can we return to The Source???  Actually, no.  Well, maybe a little bit.
    While you won't get the actual source code from the compiled DLL and EXE files, you'll get references to source files, debugging symbols, variables and module names, error messages, and even the startup flags for SQL Server.  And if you search for "DBCC" or "CHECKDB" you'll find a really nice section listing all the DBCC commands, including the undocumented ones.  Granted those are pretty easy to find online, but you may be surprised what those web sites DIDN'T tell you!  (And neither will I, go look for yourself!)  And as we saw earlier, you'll also find execution plan elements, query processing rules, and who knows what else.  It's also instructive to see how Microsoft organizes their source directories, how various components (storage engine, query processor, Full Text, AlwaysOn/HADR) are split into smaller modules.  There are over 2000 source file references, go do some exploring!

    So what did we learn?  We can pull strings out of executable files, search them for known items, browse them for unknown items, and use the results to examine internal code to learn even more things about SQL Server.  We've even learned how to use command-line utilities!  We are now 1337 h4X0rz!  (Not really.  I hate that leetspeak crap.)

    Although, I must confess I might've gone too far with the "conspiracy" part of this post.  I apologize for that, it's just my overactive imagination.  There's really no hidden agenda or conspiracy regarding SQL Server internals.  It's not The Matrix.  It's not like you'd find anything like that in there:

        Attach Matrix Database
        DM_MATRIX_COMM_PIPELINES
        MATRIXXACTPARTICIPANTS
        dm_matrix_agents

    Alright, enough of this paranoid ranting!  Microsoft are not really evil!  It's not like they're The Borg from Star Trek:

        ALTER FEDERATION DROP
        ALTER FEDERATION SPLIT
        DROP FEDERATION

    #tsql2sday

    Read the article

  • DIY Carbonator Creates Pop Rocks Like Fizzy Fruit [Science]

    - by Jason Fitzpatrick
    If you’ve ever sat around wishing that scientists would stop wasting time trying to solve pressing global problems and instead genetically engineer a bizarre but delicious hybrid of Pop Rocks candy and wholesome fruit, this mad scientist experiment is for you. Over at Evil Mad Scientist Laboratories they share a really fun weekend project. Contributor Rich Faulhaber was looking for a way to make eating fruit extra fun and science-infused for his kids. His solution? Build a homemade carbon dioxide injector that infuses fruit with carbonation. Having trouble imagining that? Envision a bowl of strawberries where every strawberry bursts into a crazy flurry of strawberry flavor and champagne bubbles every time you bite into it. Fizzy fruit! Hit up the link below to see how he took pretty common parts (a CO2 tank from a paintball gun, a water filter canister from the hardware store, and other cheap and readily available items, with the exception of the gas regulator, which he suggests you shop garage sales and surplus stores to find a deal on) and combined them into a CO2 fruit infuser, and to read more about his setup and the procedure he uses to infuse fruit with carbonation. The C02inator [Evil Mad Scientist Laboratories via Hack a Day]

    Read the article

  • How can player actions be "judged morally" in a measurable way?

    - by Sebastien Diot
    While measuring a player's "skill" and "effort" is usually easy, adding some "less objective" statistics can give the player supplementary goals, especially in a MUD/RPG context. What I mean is that apart from counting how many orcs were killed and gems collected, it would be interesting to have something along the lines of the traditional Good/Evil, Lawful/Chaotic rankings of paper-based RPGs, to add "dimension" to the game. But computers cannot differentiate good from evil effectively (nor can humans in many cases), and if you have a set of "laws" which are precise enough that you can tell exactly when the player breaks them, then it generally makes more sense to actually prevent them from doing that action in the first place. One example could be a creation/destruction axis (if players are at all allowed to create/build things), possibly in the form of the general effect of the player's actions on "ecology". So what else is left that can be effectively measured and would provide a sense of "morality" for the player? The more axes I have to measure, the more goals the player can have, and therefore the longer the game can last. This also gives players more ways of "differentiating" themselves among hordes of other players of the same "class" and similar "kit".
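
    A rough sketch (not from the original post; names are invented for illustration) of what one more measurable axis could look like in code: tag each world-changing action with per-axis deltas and keep running totals per player, so creation/destruction or ecology is scored the same way kills and gems already are:

        using System;
        using System.Collections.Generic;

        public enum MoralAxis { CreationDestruction, Ecology, LawChaos }

        public class MoralProfile
        {
            private readonly Dictionary<MoralAxis, int> _scores = new Dictionary<MoralAxis, int>();

            // Called by whatever handler fires when the player builds, burns, plants, pollutes, etc.
            public void Record(MoralAxis axis, int delta)
            {
                int current;
                _scores.TryGetValue(axis, out current);
                _scores[axis] = current + delta;
            }

            public int Score(MoralAxis axis)
            {
                int s;
                return _scores.TryGetValue(axis, out s) ? s : 0;
            }
        }

        // usage: building a house might count as +5 creation, clear-cutting a forest as -3 ecology
        // profile.Record(MoralAxis.CreationDestruction, 5);
        // profile.Record(MoralAxis.Ecology, -3);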

    Read the article

  • Can you configure multiple KMS hosts in a primary / secondary relationship?

    - by Mark Hall
    We have two datacenters in our environment: primary and DR. I need to deploy a KMS service, and to be proactive, I would like to have a host in both datacenters. From what I have read, you can have up to 6 hosts without calling Microsoft, and it appears that what will happen is that an SRV record for each host will be placed in DNS. The client will query for those SRV records, randomly choose a host for the initial activation, and use that same server for all renewals. The server can be changed manually through a script, and will change automatically if the initial server is unavailable when activating or renewing. My question is: has anyone found a way to designate one server as the primary KMS host and the other as failover only? The reason I ask is that it is preferred that clients communicate with the primary datacenter during normal operations and only talk to the DR datacenter when needed, because the bandwidth between the offices and the DR datacenter is limited compared to the primary. I am sure this has been done before, but I cannot find it in MSFT's documentation. Thanks, Mark
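
    For what it's worth (not from the original post; host names are placeholders), if auto-discovery can't be made to prefer one site, the manual-assignment route the question mentions is a short script pushed out per site:

        rem point this client at the primary-site KMS host instead of DNS auto-discovery
        cscript //nologo %windir%\system32\slmgr.vbs /skms kms-primary.example.com:1688
        cscript //nologo %windir%\system32\slmgr.vbs /ato

        rem to fail over to the DR host later, re-run with the DR name:
        rem cscript //nologo %windir%\system32\slmgr.vbs /skms kms-dr.example.com:1688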

    Read the article

  • ESX 3.5 refuses to update

    - by Speeddymon
    I have a set of ESX 3.5 servers in 2 different datacenters. One is DR, one is production. They are on the same VLAN, so I can access any of them on the private network from my vCenter server. Last month, as a learning experience (I hadn't dealt with ESX much before), I updated the DR server. Other than finding out that a couple of bundles had to be installed manually in order to get the rest to install from vCenter, it went off without a hitch. Now I'm trying to do the same for our production servers and it is not working. I've googled around for the error I get during the scan and investigated loads of different solutions (editing the integrity file, checking DNS, etc.) -- I did already install the 2 bundles that had to be installed manually -- but the scan from vCenter is just not working. Side note: I did just scan the DR server again and that scan works fine, so it shouldn't be a problem with vCenter that has cropped up recently -- it has to be something else. The error I get is:

        Patch metadata for (servername) missing. Please download updates metadata first.
        Failed to scan (servername) for updates.

    I'm all out of ideas on how to make this work, so any help would be hugely appreciated.

    Read the article

  • asp.net gridview

    - by arjun
    I have a GridView which has bound fields and a template field for a checkbox. I wrote code to delete records according to which checkboxes are checked. My problem is:

        HtmlInputCheckBox chk;
        foreach (GridViewRow dr in dgvdetails.Rows)
        {
            chk = (HtmlInputCheckBox)dr.FindControl("ch");
            chk.Checked = true;
            if (chk.Checked)  // here the checkbox is not checked even if I check it
            {
                pl.id = int.Parse(chk.Value);
                bl.deletedgvdetails(pl);
            }
        }
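
    A hedged sketch of the usual fix (BindGrid and btnDelete_Click are placeholder names; pl and bl come from the snippet above): if the grid is re-bound on every postback, the posted checkbox state is wiped before this loop runs, so bind only on the first request and read Checked rather than overwriting it:

        protected void Page_Load(object sender, EventArgs e)
        {
            // re-binding on every postback recreates the rows and loses the user's checks
            if (!IsPostBack)
            {
                BindGrid();   // whatever currently fills dgvdetails
            }
        }

        protected void btnDelete_Click(object sender, EventArgs e)
        {
            foreach (GridViewRow row in dgvdetails.Rows)
            {
                HtmlInputCheckBox chk = (HtmlInputCheckBox)row.FindControl("ch");
                if (chk != null && chk.Checked)   // read the posted state, don't set it first
                {
                    pl.id = int.Parse(chk.Value);
                    bl.deletedgvdetails(pl);
                }
            }
        }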

    Read the article

  • Global Thermonuclear War [closed]

    - by Vivin Paliath
    Hey there, I'm Dr. Falken and I'm trying to make a computer program on my computer (WOPR) that simulates Global Thermonuclear War. So far I've simulated Checkers and Tic-Tac-Toe, but I've never tried to do anything on this scale. Any pointers on how I should start? Sincerely, Dr. Falken

    Read the article

  • ASP.Net check value with DBNULL

    - by c11ada
    Hey all, I have the following code:

        foreach (DataRowView dr in Data)
        {
            if (dr == System.DBNull.Value)
            {
                nedID = 1;
            }
        }

    but I get the following error:

        Operator '==' cannot be applied to operands of type 'System.Data.DataRowView' and 'System.DBNull'

    Please can someone advise me on how I can check whether the value is null or DBNull?
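
    A hedged sketch of the usual correction ("SomeColumn" is a placeholder, since the original doesn't say which field is being tested): a DataRowView itself is never DBNull; it's an individual field inside it that can be, so compare the field value:

        foreach (DataRowView dr in Data)
        {
            object value = dr["SomeColumn"];
            if (value == null || value == DBNull.Value)   // covers both null and database NULL
            {
                nedID = 1;
            }
        }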

    Read the article

  • ASP.Net Error - Unable to cast object of type 'System.String' to type 'System.Data.DataTable'.

    - by xtrabits
    I get the error below:

        Unable to cast object of type 'System.String' to type 'System.Data.DataTable'.

    This is the code I'm using:

        Dim str As String = String.Empty
        If (Session("Brief") IsNot Nothing) Then
            Dim dt As DataTable = Session("Brief")
            If (dt.Rows.Count > 0) Then
                For Each dr As DataRow In dt.Rows
                    If (str.Length > 0) Then str += ","
                    str += dr("talentID").ToString()
                Next
            End If
        End If
        Return str

    Thanks
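
    A hedged sketch of the usual guard (keeping the original names): the error means Session("Brief") holds a String rather than a DataTable on this request, so check the runtime type with TryCast before using it, and track down whatever code stores a String under that key:

        Dim str As String = String.Empty
        Dim dt As DataTable = TryCast(Session("Brief"), DataTable)  ' Nothing if the entry is a String or missing
        If dt IsNot Nothing AndAlso dt.Rows.Count > 0 Then
            For Each dr As DataRow In dt.Rows
                If str.Length > 0 Then str += ","
                str += dr("talentID").ToString()
            Next
        End If
        Return str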

    Read the article

  • How can I solve an out-of-memory exception when adding to a generic list?

    - by Phsika
    How can I solve an out-of-memory exception in a generic list when adding new values?

        foreach (DataColumn dc in dTable.Columns)
            foreach (DataRow dr in dTable.Rows)
                myScriptCellsCount.MyCellsCharactersCount.Add(dr[dc].ToString().Length);

    My base class:

        public class MyExcelSheetsCells
        {
            public List<int> MyCellsCharactersCount { get; set; }

            public MyExcelSheetsCells()
            {
                MyCellsCharactersCount = new List<int>();
            }
        }
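
    A hedged alternative (it assumes the per-cell lengths are only needed in aggregate, which the original doesn't say): storing one int per cell means rows × columns list entries, so for a very large DataTable it is lighter to aggregate as you go, or at least pre-size the list:

        // aggregate instead of keeping every individual length
        long totalChars = 0;
        int maxCellLength = 0;
        foreach (DataColumn dc in dTable.Columns)
        {
            foreach (DataRow dr in dTable.Rows)
            {
                int len = dr[dc].ToString().Length;
                totalChars += len;
                if (len > maxCellLength) maxCellLength = len;
            }
        }

        // or, if every length really is needed, avoid repeated re-allocations:
        var counts = new List<int>(dTable.Rows.Count * dTable.Columns.Count);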

    Read the article

  • Adding a GridViewRowCollection to the asp.net GridView

    - by CoffeeCode
    I have a GridViewRowCollection and I'm having issues with adding it to the grid.

        GridViewRowCollection dr = new GridViewRowCollection(list);
        StatisticsGrid.DataSource = dr;

    doesn't work, and StatisticsGrid.Rows doesn't have an Add method, which is strange. How can I add a GridViewRowCollection without creating a DataTable and binding it to the DataSource? Thanks in advance.
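
    A hedged sketch of the usual pattern (here "list" stands for whatever objects the rows were originally built from): a GridViewRowCollection isn't a bindable data source, so the grid is normally given the underlying data and left to create its own rows:

        // bind the original data (a DataTable, List<T>, etc.), not rows taken from another grid
        StatisticsGrid.DataSource = list;
        StatisticsGrid.DataBind();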

    Read the article

  • How can I solve an out-of-memory exception in a generic list?

    - by Phsika
    How can I solve an out-of-memory exception in a generic list when adding new values?

        foreach (DataColumn dc in dTable.Columns)
            foreach (DataRow dr in dTable.Rows)
                myScriptCellsCount.MyCellsCharactersCount.Add(dr[dc].ToString().Length);

    My base class:

        public class MyExcelSheetsCells
        {
            public List<int> MyCellsCharactersCount { get; set; }

            public MyExcelSheetsCells()
            {
                MyCellsCharactersCount = new List<int>();
            }
        }

    Read the article

  • Merge Multiple Excel Workbooks

    - by IRHM
    I wonder whether someone may be able to help me please. I'm trying to use the code below to allow the user to select multiple Excel workbooks, amalgamating the data into one 'Summary' sheet.

        Sub Merge()
            Dim DestWB As Workbook, WB As Workbook, WS As Worksheet, SourceSheet As String
            Set DestWB = ActiveWorkbook
            SourceSheet = "Input"
            startrow = 7
            FileNames = Application.GetOpenFilename( _
                filefilter:="Excel Files (*.xls*),*.xls*", _
                Title:="Select the workbooks to merge.", MultiSelect:=True)
            If IsArray(FileNames) = False Then
                If FileNames = False Then Exit Sub
            End If
            For n = LBound(FileNames) To UBound(FileNames)
                Set WB = Workbooks.Open(Filename:=FileNames(n), ReadOnly:=True)
                For Each WS In WB.Worksheets
                    If WS.Name = SourceSheet Then
                        With WS
                            If .UsedRange.Cells.Count > 1 Then
                                dr = DestWB.Worksheets("Input").Range("C" & Rows.Count).End(xlUp).Row + 1
                                lastrow = .Range("C" & Rows.Count).End(xlUp).Row
                                For j = lastrow To startrow Step -1
                                    Select Case .Range("E" & j).Value
                                        Case "Manager", "Lead", "Technical", "Analyst"
                                            'do nothing
                                        Case Else
                                            .Rows(j).EntireRow.Delete
                                    End Select
                                Next
                                lastrow = .Range("C" & Rows.Count).End(xlUp).Row
                                If lastrow >= startrow Then
                                    .Range("B" & startrow & ":AD" & lastrow).Copy
                                    DestWB.Worksheets("Input").Cells(dr, "B").PasteSpecial xlValues
                                    .Range("AF" & startrow & ":AQ" & lastrow).Copy
                                    DestWB.Worksheets("Input").Cells(dr, "AF").PasteSpecial xlValues
                                    .Range("AS" & startrow & ":AS" & lastrow).Copy
                                    DestWB.Worksheets("Input").Cells(dr, "AS").PasteSpecial xlValues
                                End If
                            End If
                        End With
                        Exit For
                    End If
                Next WS
                WB.Close savechanges:=False
            Next n
        End Sub

    The code works fine except for one issue which I've been trying to solve for the last few weeks. The following line looks in column E of the source file, and if an entry matches the values shown in the code it keeps that row of data to be copied into the destination file; otherwise the row is deleted:

        If Range("E" & j) <> "Manager" And Range("E" & j) <> "Lead" And Range("E" & j) <> "Technical" And Range("E" & j) <> "Analyst" Then Rows(j).Delete

    The problem I have is that if none of these values are found in the source file, I receive the following error:

        Run time error '1004': Delete method of Range class failed

    In Debug mode it highlights this part of the line as the source of the error, but I've no idea why:

        Rows(j).Delete

    I just wondered whether someone may be able to look at this please and let me know where I'm going wrong, or perhaps even suggest a more efficient process of allowing the user to merge the workbooks. Many thanks and kind regards.
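
    A hedged guess at one contributing factor (this is not from the original thread): in the standalone line quoted above, Range("E" & j) and Rows(j) are unqualified, so they refer to whatever sheet happens to be active rather than to WS; inside the With WS block the leading dot keeps everything on the sheet actually being processed:

        ' inside With WS, the dot ties Range/Rows to the source worksheet
        If .Range("E" & j).Value <> "Manager" And .Range("E" & j).Value <> "Lead" _
           And .Range("E" & j).Value <> "Technical" And .Range("E" & j).Value <> "Analyst" Then
            .Rows(j).EntireRow.Delete
        End If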

    Read the article

  • JPRT: A Build & Test System

    - by kto
    DRAFT

    A while back I did a little blogging on a system called JPRT, the hardware used, and a summary on my java.net weblog. This is an update on the JPRT system.

    JPRT ("JDK Putback Reliability Testing", but ignore what the letters stand for, I change what they mean every day, just to annoy people :^) is a build and test system for the JDK, or any source base that has been configured for JPRT. As I mentioned in the above blog, JPRT is a major modification to a system called PRT that the HotSpot VM development team has been using for many years, very successfully I might add. Keeping the source base always buildable and reliable is the first step in the 12 steps of dealing with your product quality... or was that the 12 steps from Alcoholics Anonymous... oh well, anyway, it's the first of many steps. ;^)

    Internally when we make changes to any part of the JDK, there are certain procedures we are required to perform prior to any putback or commit of the changes. The procedures often vary from team to team, depending on many factors, such as whether native code is changed, or if the change could impact other areas of the JDK. But a common requirement is a verification that the source base with the changes (and merged with the very latest source base) will build on many if not all 8 platforms, and a full 'from scratch' build, not an incremental build, which can hide full build problems. The testing needed varies, depending on what has been changed.

    Anyone that has worked on a project where multiple engineers or groups are submitting changes to a shared source base knows how disruptive a 'bad commit' can be on everyone. How many times have you heard: "So And So made a bunch of changes and now I can't build!"? But multiply the number of platforms by 8, and make all the platforms old and antiquated OS versions with bizarre system setup requirements, and you have a pretty complicated situation (see http://download.java.net/jdk6/docs/build/README-builds.html).

    We don't tolerate bad commits, but our enforcement is somewhat lacking, usually it's an 'after the fact' correction. Luckily the Source Code Management system we use (another antique called TeamWare) allows for a tree of repositories, and 'bad commits' are usually isolated to a small team. Punishment to date has been pretty drastic; the Queen of Hearts in 'Alice in Wonderland' said 'Off With Their Heads', and trust me, you don't want to be the engineer doing a 'bad commit' to the JDK. With JPRT, hopefully this will become a thing of the past. Not that we have had many 'bad commits' to the master source base; in general the teams doing the integrations know how important their jobs are and they rarely make 'bad commits'. So for these JDK integrators, maybe what JPRT does is keep them from chewing their fingernails at night. ;^)

    Over the years each of the teams has accumulated sets of machines they use for building, or they use some of the shared machines available to all of us. But the hunt for build machines is just part of the job, or has been. And although the issues with consistency of the build machines haven't been a horrible problem, often you never know if the Solaris build machine you are using has all the right patches, or if the Linux machine has the right service pack, or if the Windows machine has its latest updates. Hopefully the JPRT system can solve this problem. When we ship the binary JDK bits, it is SO very important that the build machines are correct, and we know how difficult it is to get them set up.
    Sure, if you need to debug a JDK problem that only shows up on Windows XP or Solaris 9, you'll still need to hunt down a machine, but not as a regular everyday occurrence.

    I'm a big fan of a regular nightly build and test system, constantly verifying that a source base builds and tests out. There are many examples of automated build/tests, some that trigger on any change to the source base, some that just run every night. Some provide a protection gateway to the 'golden' source base, which only gets changes that the nightly process has verified are good. The JPRT (and PRT) system is meant to guard the source base before anything is sent to it, guarding all source bases from the evil developer. Well, maybe 'evil' isn't the right word, I haven't met many 'evil' developers, more like 'error prone' developers. ;^) Humm, come to think of it, I may be one from time to time. :^{

    But the point is that by spreading the build up over a set of machines, and getting the turnaround down to under an hour, it becomes realistic to completely build on all platforms and test it, on every putback. We have the technology, we can build and rebuild and rebuild, and it will be better than it was before, ha ha... Anybody remember the Six Million Dollar Man? Man, I gotta get out more often.. Anyway, now the nightly build and test can become a 'fetch the latest JPRT build bits' and start extensive testing (the testing not done by JPRT, or the platforms not tested by JPRT).

    Is it Open Source? No, not yet. Would you like it to be? Let me know. Or is it more important that you have the ability to use such a system for JDK changes?

    So enough blabbering on about this JPRT system, tell me what you think. And let me know if you want to hear more about it or not. Stay tuned for the next episode, same Bloody Bat time, same Bloody Bat channel. ;^)

    -kto

    Read the article

  • Help with InvalidCastException

    - by Robert
    I have a GridView and, when a record is double-clicked, I want it to open up a new detail-view form for that particular record. As an example, I have created a Customer class:

        using System;
        using System.Data;
        using System.Configuration;
        using System.Collections.Generic;
        using System.ComponentModel;
        using System.Data.SqlClient;
        using System.Collections;

        namespace SyncTest
        {
            #region Customer Collection
            public class CustomerCollection : BindingListView<Customer>
            {
                public CustomerCollection() : base() { }
                public CustomerCollection(List<Customer> customers) : base(customers) { }
                public CustomerCollection(DataTable dt)
                {
                    foreach (DataRow oRow in dt.Rows)
                    {
                        Customer c = new Customer(oRow);
                        this.Add(c);
                    }
                }
            }
            #endregion

            public class Customer : INotifyPropertyChanged, IEditableObject, IDataErrorInfo
            {
                private string _CustomerID;
                private string _CompanyName;
                private string _ContactName;
                private string _ContactTitle;
                private string _OldCustomerID;
                private string _OldCompanyName;
                private string _OldContactName;
                private string _OldContactTitle;
                private bool _Editing;
                private string _Error = string.Empty;
                private EntityStateEnum _EntityState;
                private Hashtable _PropErrors = new Hashtable();

                public event PropertyChangedEventHandler PropertyChanged;

                private void FirePropertyChangeNotification(string propName)
                {
                    if (PropertyChanged != null)
                    {
                        PropertyChanged(this, new PropertyChangedEventArgs(propName));
                    }
                }

                public Customer()
                {
                    this.EntityState = EntityStateEnum.Unchanged;
                }

                public Customer(DataRow dr)
                {
                    //Populates the business object item from a data row
                    this.CustomerID = dr["CustomerID"].ToString();
                    this.CompanyName = dr["CompanyName"].ToString();
                    this.ContactName = dr["ContactName"].ToString();
                    this.ContactTitle = dr["ContactTitle"].ToString();
                    this.EntityState = EntityStateEnum.Unchanged;
                }

                public string CustomerID { get { return _CustomerID; } set { _CustomerID = value; FirePropertyChangeNotification("CustomerID"); } }
                public string CompanyName { get { return _CompanyName; } set { _CompanyName = value; FirePropertyChangeNotification("CompanyName"); } }
                public string ContactName { get { return _ContactName; } set { _ContactName = value; FirePropertyChangeNotification("ContactName"); } }
                public string ContactTitle { get { return _ContactTitle; } set { _ContactTitle = value; FirePropertyChangeNotification("ContactTitle"); } }

                public Boolean IsDirty
                {
                    get { return ((this.EntityState != EntityStateEnum.Unchanged) || (this.EntityState != EntityStateEnum.Deleted)); }
                }

                public enum EntityStateEnum { Unchanged, Added, Deleted, Modified }

                void IEditableObject.BeginEdit()
                {
                    if (!_Editing)
                    {
                        _OldCustomerID = _CustomerID;
                        _OldCompanyName = _CompanyName;
                        _OldContactName = _ContactName;
                        _OldContactTitle = _ContactTitle;
                    }
                    this.EntityState = EntityStateEnum.Modified;
                    _Editing = true;
                }

                void IEditableObject.CancelEdit()
                {
                    if (_Editing)
                    {
                        _CustomerID = _OldCustomerID;
                        _CompanyName = _OldCompanyName;
                        _ContactName = _OldContactName;
                        _ContactTitle = _OldContactTitle;
                    }
                    this.EntityState = EntityStateEnum.Unchanged;
                    _Editing = false;
                }

                void IEditableObject.EndEdit()
                {
                    _Editing = false;
                }

                public EntityStateEnum EntityState { get { return _EntityState; } set { _EntityState = value; } }

                string IDataErrorInfo.Error { get { return _Error; } }

                string IDataErrorInfo.this[string columnName] { get { return (string)_PropErrors[columnName]; } }

                private void DataStateChanged(EntityStateEnum dataState, string propertyName)
                {
                    //Raise the event
                    if (PropertyChanged != null && propertyName != null)
                    {
                        PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
                    }
                    //If the state is deleted, mark it as deleted
                    if (dataState == EntityStateEnum.Deleted)
                    {
                        this.EntityState = dataState;
                    }
                    if (this.EntityState == EntityStateEnum.Unchanged)
                    {
                        this.EntityState = dataState;
                    }
                }
            }
        }

    Here is the code for the double-click event:

        private void customersDataGridView_CellDoubleClick(object sender, DataGridViewCellEventArgs e)
        {
            Customer oCustomer = (Customer)customersBindingSource.CurrencyManager.List[customersBindingSource.CurrencyManager.Position];
            CustomerForm oForm = new CustomerForm();
            oForm.NewCustomer = oCustomer;
            oForm.ShowDialog(this);
            oForm.Dispose();
            oForm = null;
        }

    Unfortunately, when this code runs, I receive an InvalidCastException stating "Unable to cast object of type 'System.Data.DataRowView' to type 'SyncTest.Customer'". This error occurs on the very first line of that event:

        Customer oCustomer = (Customer)customersBindingSource.CurrencyManager.List[customersBindingSource.CurrencyManager.Position];

    What am I doing wrong?... and what can I do to fix this? Any help is greatly appreciated. Thanks!
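
    A hedged sketch of the likely explanation (not from the original thread; customersDataTable is a placeholder): the exception says the binding source's list contains DataRowView items, i.e. the grid was bound to a DataTable/DataView rather than to a CustomerCollection, so either bind the collection instead, or accept the row view and build the Customer from it:

        // option 1: bind the business objects so the current item really is a Customer
        // customersBindingSource.DataSource = new CustomerCollection(customersDataTable);

        // option 2: keep the DataTable binding and convert on the spot
        DataRowView rowView = (DataRowView)customersBindingSource.Current;
        Customer oCustomer = new Customer(rowView.Row);   // uses the Customer(DataRow) constructor above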

    Read the article
