Search Results

Search found 988 results on 40 pages for 'hacker pk'.


  • MYSQL inserting records form table A into tables B and C (linked by foreign key) depending on column values in table A

    - by Chez
    Hi all, I have been searching high and low for a simple solution to a MySQL insert problem. The problem is as follows: I am putting together an organisational database consisting of departments and desks. A department may or may not have n desks. Departments and desks each have their own table, linked by a foreign key in desks to the relevant record in departments (i.e. its pk). I have a temporary table into which I place all new department data (n records long). In this table, the desk records for a department follow the department record directly below it. In the TEMP table, if the column department_name has a value, the row is a department; if it doesn't, it will have a value in the column desk and is therefore a desk related to the department above it. As I said, there may be several desk records until you get to the next department record. OK, so what I want to do is the following: insert the departments into the departments table and their desks into the desks table, generating a foreign key in each desk record pointing to the relevant department's id. In pseudo-ish code: for each record in the TEMP table, if it is a department, INSERT the record into Departments, then get the id of the newly created Department record and store it somewhere; if it is a desk, INSERT the desk into the Desks table with the relevant department's id as the foreign key. Note once again that all of a department's desks directly follow the department in the TEMP table. Many thanks
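    A minimal sketch of one way to do this, assuming a Python client with the mysql.connector driver and hypothetical table/column names (temp_import, departments, desks): the key point is capturing cursor.lastrowid after each department INSERT and reusing it for the desk rows that follow.

        import mysql.connector

        conn = mysql.connector.connect(user="user", password="secret", database="orgdb")
        cur = conn.cursor()

        # Walk the staging table in its original order; desks follow their department.
        cur.execute("SELECT department_name, desk FROM temp_import ORDER BY id")
        rows = cur.fetchall()

        insert_cur = conn.cursor()
        current_dept_id = None
        for department_name, desk in rows:
            if department_name:  # a department row
                insert_cur.execute(
                    "INSERT INTO departments (name) VALUES (%s)", (department_name,)
                )
                current_dept_id = insert_cur.lastrowid  # remember the new pk
            else:                # a desk row belonging to the last department seen
                insert_cur.execute(
                    "INSERT INTO desks (department_id, name) VALUES (%s, %s)",
                    (current_dept_id, desk),
                )

        conn.commit()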

    Read the article

  • performance issue: difference between select s.* vs select *

    - by kamil
    Recently I had a performance problem with my query. The situation is described here: poor Hibernate select performance comparing to running directly - how debug? After a long time of struggling, I finally discovered that a query with a select prefix like: select sth.* from Something as sth... is 300 times slower than a query started this way: select * from Something as sth... Could somebody help me and answer why that is so? Some external documents on this would be really useful. The tables used for testing were: SALES_UNIT contains some basic info about a sales unit node, such as its name. Its only association is to the table SALES_UNIT_TYPE, as ManyToOne. The primary key is ID plus the field VALID_FROM_DTTM, which is a date. SALES_UNIT_RELATION contains the PARENT-CHILD relation between sales unit nodes. It consists of SALES_UNIT_PARENT_ID, SALES_UNIT_CHILD_ID and VALID_TO_DTTM/VALID_FROM_DTTM, with no association to any other table. The PK here is ..PARENT_ID, ..CHILD_ID and VALID_FROM_DTTM. The actual queries I ran were:
    select s.* from sales_unit s left join sales_unit_relation r on (s.sales_unit_id = r.sales_unit_child_id) where r.sales_unit_child_id is null
    select * from sales_unit s left join sales_unit_relation r on (s.sales_unit_id = r.sales_unit_child_id) where r.sales_unit_child_id is null
    The same query, both use a left join, and the only difference is the select.
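    One way to narrow this down (a sketch, not from the original post) is to capture and compare the execution plans the optimizer picks for the two statements; assuming a MySQL-compatible server and the mysql.connector driver, something like:

        import mysql.connector

        QUERY = (
            "select {cols} from sales_unit s "
            "left join sales_unit_relation r on (s.sales_unit_id = r.sales_unit_child_id) "
            "where r.sales_unit_child_id is null"
        )

        conn = mysql.connector.connect(user="user", password="secret", database="testdb")
        cur = conn.cursor()

        # Compare the plan chosen for "select s.*" against the one for "select *".
        for cols in ("s.*", "*"):
            cur.execute("EXPLAIN " + QUERY.format(cols=cols))
            print("plan for select", cols)
            for row in cur.fetchall():
                print(" ", row)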

    Read the article

  • Joining tables with composite keys in a legacy system in hibernate

    - by Steve N
    Hi, I'm currently trying to create a pair of Hibernate annotated classes to load (read only) from a pair of tables in a legacy system. The legacy system uses a consistent (if somewhat dated) approach to keying tables. The tables I'm attempting to map are as follows:
    Customer: customerNumber:string (pk), name:string
    CustomerAddress: customerNumber:string (pk_1), sequenceNumber:int (pk_2), street:string, postalCode:string
    I've approached this by creating a CustomerAddress class like this:

        @Entity
        @Table(name="CustomerAddress")
        @IdClass(CustomerAddressKey.class)
        public class CustomerAddress {
            @Id
            @AttributeOverrides({
                @AttributeOverride(name = "customerNumber", column = @Column(name="customerNumber")),
                @AttributeOverride(name = "sequenceNumber", column = @Column(name="sequenceNumber"))
            })
            private String customerNumber;
            private int sequenceNumber;
            private String name;
            private String postalCode;
            ...
        }

    where the CustomerAddressKey class is a simple Serializable object with the two key fields. The Customer object is then defined as:

        @Entity
        @Table(name = "Customer")
        public class Customer {
            private String customerNumber;
            private List<CustomerAddress> addresses = new ArrayList<CustomerAddress>();
            private String name;
            ...
        }

    So, my question is: how do I express the OneToMany relationship on the Customer table?

    Read the article

  • Common Properties: Consolidating Loan, Purchase, Inventory and Sale tables into one Transaction table

    - by Frank Computer
    Pawnshop application: I have separate tables for Loan, Purchase, Inventory & Sale transactions. Each table's rows are joined to their respective customer rows by: customer.pk [serial] = loan.fk [integer] = purchase.fk [integer] = inventory.fk [integer] = sale.fk [integer]. Since there are so many common properties within the four tables, I consolidated the four tables into one table called "transaction", with a column transaction.trx_type char(1) {L=Loan, P=Purchase, I=Inventory, S=Sale}. Scenario: a customer initially pawns merchandise, makes a couple of interest payments, then decides he wants to sell the merchandise to the pawnshop, which then places the merchandise in Inventory and eventually sells it to another customer. I designed a generic transaction table where, for example, transaction.main_amount DECIMAL(7,2) holds the pawn amount in a loan transaction, the purchase price in a purchase, and the sale price in inventory and sale transactions. This is clearly a denormalized design, but it has made programming a lot easier and improved performance. Any type of transaction can now be performed from within one screen, without the need to switch to different tables.
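    To make the design concrete, here is a minimal sketch of what such a consolidated table might look like; the original post does not give the full DDL, so column names beyond trx_type and main_amount are invented, and a MySQL backend is assumed purely for illustration:

        import mysql.connector  # assuming a MySQL backend purely for illustration

        DDL = """
        CREATE TABLE `transaction` (
            trx_id      INT PRIMARY KEY AUTO_INCREMENT,
            customer_fk INT NOT NULL,              -- joins to customer.pk
            trx_type    CHAR(1) NOT NULL,          -- L=Loan, P=Purchase, I=Inventory, S=Sale
            main_amount DECIMAL(7,2),              -- pawn amount, purchase price, or sale price
            trx_date    DATE
        )
        """

        conn = mysql.connector.connect(user="user", password="secret", database="pawnshop")
        cur = conn.cursor()
        cur.execute(DDL)
        conn.commit()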

    Read the article

  • Hibernate 1:M relationship, row order, constant values table and concurrency

    - by EugeneP
    Tables A and B need to have a 1:M relationship. A's and B's are added at application runtime, so an A is created, then say 4 B's are created. Each B instance has to come in order, so that I can later extract them in the same order as I added them. The app will be a web app running on Tomcat, so 10 instances may be working simultaneously. So my questions are: 1) How do I preserve insertion order, so that I can extract the B instances that A references in the same order as I persisted them? That's tricky, because we add to a Collection and then it gets saved (am I right?). So it depends on how Hibernate saves it; what if it changes the order in which we added the instances? I've seen something like LIST instead of SET when describing relationships; is that what I need? 2) How do I add a third column to B so that I can differentiate the instances, something like SEX(M,F,U) in the B table? Do I need a special table, or is there an easy way to describe constants in Hibernate? What do you recommend? 3) Talking about concurrency, what methods do you recommend? There should be no collisions in the db and, as you see, there might easily be some if rows are not inserted (PK added) right where the insert is invoked, without delays.

    Read the article

  • Database design with Blob

    - by mmuthu
    Hi, I have a situation where I need to store binary data in the database as a blob column. There are three different tables in my database in which I need to store blob data for each record. Not every record will have blob data all the time; it is time and user based. Table one will have to store *.doc files for almost every record. Table two will have to store *.xml files optionally. Table three will have to store images (not sure of the frequency, etc.). Now my question is whether it is a good idea to maintain a separate table to store the blob data, pointing it to the respective tables' PKs (yes, there will be no FKs, and the assumption is that the program will maintain the links). It would be something like this: BLOB|PK_ID|TABLE_NAME. Alternatively, is it a good idea to keep the blob column in the respective tables? As far as my application runtime is concerned, table 2 will be read very frequently, though the blob column will not be required, and table 2 records will get deleted frequently. Similarly, the blob data in the other tables will not be accessed frequently; all of the blob content will be read on an on-demand basis. I'm thinking the first approach will work better for me. What do you guys think? Btw, I'm using Oracle.
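    For the first approach, a minimal sketch of the shared blob table and an on-demand read, assuming Oracle with the python-oracledb driver; the names (blob_store, DOC_TABLE) are illustrative only:

        import oracledb  # python-oracledb driver; names below are made up

        conn = oracledb.connect(user="app", password="secret", dsn="localhost/orclpdb1")
        cur = conn.cursor()

        # One shared table: the owning row is identified by its table name plus PK value.
        cur.execute("""
            CREATE TABLE blob_store (
                table_name VARCHAR2(30) NOT NULL,
                pk_id      NUMBER       NOT NULL,
                content    BLOB,
                CONSTRAINT blob_store_pk PRIMARY KEY (table_name, pk_id)
            )""")

        # On-demand read: fetch the blob only when the caller actually needs it.
        cur.execute(
            "SELECT content FROM blob_store WHERE table_name = :t AND pk_id = :id",
            t="DOC_TABLE", id=42,
        )
        row = cur.fetchone()
        data = row[0].read() if row and row[0] is not None else None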

    Read the article

  • Create an index only on certain rows in mysql

    - by dhruvbird
    So, I have this funny requirement of creating an index on a table only on a certain set of rows. This is what my table looks like: USER: userid, friendid, created, blah0, blah1, ..., blahN Now, I'd like to create an index on: (userid, friendid, created) but only on those rows where userid = friendid. The reason being that this index is only going to be used to satisfy queries where the WHERE clause contains "userid = friendid". There will be many rows where this is NOT the case, and I really don't want to waste all that extra space on the index. Another option would be to create a table (query table) which is populated on insert/update of this table and create a trigger to do so, but again I am guessing an index on that table would mean that the data would be stored twice. How does mysql store Primary Keys? I mean is the table ordered on the Primary Key or is it ordered by insert order and the PK is like a normal unique index? I checked up on clustered indexes (http://dev.mysql.com/doc/refman/5.0/en/innodb-index-types.html), but it seems only InnoDB supports them. I am using MyISAM (I mention this because then I could have created a clustered index on these 3 fields in the query table). I am basically looking for something like this: ALTER TABLE USERS ADD INDEX (userid, friendid, created) WHERE userid=friendid
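    MySQL (including MyISAM) has no filtered/partial indexes, so the side table the question mentions is the usual workaround. A rough sketch of that approach, with hypothetical names and issued from Python via mysql.connector (an UPDATE/DELETE trigger would also be needed to keep the side table fully in sync):

        import mysql.connector  # names (users, users_self, ...) are illustrative

        STATEMENTS = [
            # Narrow side table holding only the rows where userid = friendid.
            """CREATE TABLE users_self (
                   userid   INT NOT NULL,
                   friendid INT NOT NULL,
                   created  DATETIME NOT NULL,
                   PRIMARY KEY (userid, friendid, created)
               ) ENGINE=MyISAM""",
            # Keep it in sync on insert.
            """CREATE TRIGGER users_self_ai AFTER INSERT ON users
               FOR EACH ROW
               BEGIN
                 IF NEW.userid = NEW.friendid THEN
                   INSERT INTO users_self (userid, friendid, created)
                   VALUES (NEW.userid, NEW.friendid, NEW.created);
                 END IF;
               END""",
        ]

        conn = mysql.connector.connect(user="user", password="secret", database="appdb")
        cur = conn.cursor()
        for stmt in STATEMENTS:
            cur.execute(stmt)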

    Read the article

  • Foreign-key-like merge in R

    - by skyl
    I'm merging a bunch of csv with 1 row per id/pk/seqn. > full = merge(demo, lab13am, by="seqn", all=TRUE) > full = merge(full, cdq, by="seqn", all=TRUE) > full = merge(full, mcq, by="seqn", all=TRUE) > full = merge(full, cfq, by="seqn", all=TRUE) > full = merge(full, diq, by="seqn", all=TRUE) > print(length(full$ridageyr)) [1] 9965 > print(summary(full$ridageyr)) Min. 1st Qu. Median Mean 3rd Qu. Max. 0.00 11.00 19.00 29.73 48.00 85.00 Everything is great. But, I have another file which has multiple rows per id like: "seqn","rxd030","rxd240b","nhcode","rxq250" 56,2,"","",NA,NA,"" 57,1,"ACETAMINOPHEN","01200",2 57,1,"BUDESONIDE","08800",1 58,1,"99999","",NA 57 has two rows. So, if I naively try to merge this file, I have a ton more rows and my data gets all skewed up. > full = merge(full, rxq, by="seqn", all=TRUE) > print(length(full$ridageyr)) [1] 15643 > print(summary(full$ridageyr)) Min. 1st Qu. Median Mean 3rd Qu. Max. 0.00 14.00 41.00 40.28 66.00 85.00 Is there a normal idiomatic way to deal with data like this? Suppose I want a way to make a simple model like MYSPECIAL_FACTOR <- somehow() glm(MYSPECIAL_FACTOR ~ full$ridageyr, family=binomial) where MYSPECIAL_FACTOR is, say, whether or not rxd240b == "ACETAMINOPHEN" for the observations which are unique by seqn. You can reproduce by running the first bit of this.
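    The question itself is about R, but purely to illustrate the idea (collapse the many-rows-per-seqn table down to one flag per seqn, then merge that, so the row count of the main data frame is unchanged), here is a sketch of the same approach in Python/pandas with made-up file names:

        import pandas as pd  # illustration only; the original workflow is in R

        full = pd.read_csv("demo.csv")   # one row per seqn
        rxq = pd.read_csv("rxq.csv")     # many rows per seqn

        # Collapse the many-rows-per-seqn table to a single boolean per seqn ...
        flag = (
            rxq.assign(acetaminophen=rxq["rxd240b"].eq("ACETAMINOPHEN"))
               .groupby("seqn", as_index=False)["acetaminophen"].any()
        )

        # ... then the merge stays one-to-one and row counts are preserved.
        full = full.merge(flag, on="seqn", how="left")
        full["acetaminophen"] = full["acetaminophen"].fillna(False)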

    Read the article

  • Book Review Charlene Li's New Book: Open Leadership

    - by david.talamelli
    A few weeks ago, I was surprised when I looked in our mail box. I had received an Advance Copy of Charlene Li's new book titled "Open Leadership: How Social Technology Can Transform the Way You Lead". Charlene sent a tweet a while back asking anyone interested in receiving the book to submit their details. I sent off my details and didn't think I would hear anything back, so it was a pleasant surprise. With that I almost feel bad that it has taken me 3 weeks to read her book. It took this long mainly because it has been hard to fit in some quality reading time for myself with work, the kids, volunteering, etc..... I am happy to report I have finished her book and wanted to run through my initial thoughts with you. I first came across Charlene Li after reading her book "Groundswell" a few years ago, her latest book "Open Leadership" is a follow on from Groundswell and to me it seems like a natural progression from the question "Ok the business landscape is changing, what do we do now?" For me these two books have a different writing style to them. Groundswell from memory spoke about broad social media concepts and adoption and alerted us to some of the changes taking place in the SM landscape. Open Leadership seems to be focussed on taking those broad concepts and finding ways to implement them into your environment. That is breaking broad concepts down into individual action items that can be measured and analysed. As the business world changes Leaders must change their approach and let go of control to more control. One of the things I love reading about is seeing real life examples of how people and organisations are making these things happen. In this book Charlene has collected some great collateral and case studies from companies such as Cisco, Best Buy, The Red Cross and The State Bank of India (as a side-note, I wish now that I submitted my input for the Leaders I work with here at Oracle - there are some great examples here of people who empower their staff). As society becomes more adept at using social media it is inevitable that Leaders must become open with their employees, clients and partners. From the book some of the key points I took away are (I actually took away a lot more from this book, this is just an overview) : 1) Organisations should encourage risk taking. Without being a "hacker", how can we improve ourselves, our processes, our business, etc... The old saying you only fail by not trying applies here. If Leaders create a culture where people are afraid to stick their neck out - how will you innovate? 2) Leaders need to lead by example - if you want to promote an open and transparent business, a Leader needs to exemplify the traits they would like to see out of their employees. 3) The definition of a Leader is changing, open leadership is about being a catalyst to change that uses networks to spread a vision as opposed to traditional leadership that is viewed as a role. 4) There is a cultural and business shift taking place. Information is more wide-spread and is being disseminated faster than any other time in the past. Leaders who are open and transparent will thrive in this new business environment. 5) Leadership is not defined by a title - it is defined by a person's actions. Also anyone can be a Leader or has Leadership potential in them- it is a matter of drawing that out of people. 
I found this book useful and I also found myself looking at my own actions and the actions of others around me (including my management) to see how open and transparent I am in my work. For me, I am glad I read this book as it validated my own thoughts on the changes we are seeing take place. This book has certainly given me some new ideas and helped me push my own boundaries of what I can do. The book has a number of action plans at the end of some of the chapters, such as "Conducting your Openness Audit", that I think have helped me take thoughts and ideas and turn them into concrete action items. I have included a link to the introduction of the book here if anyone wants to have a read of it. If anyone else has read this book, it would be great to hear your thoughts/comments/review. Leave your comments below. This article was originally posted on David Talamelli's Blog - David's Journal on Tap

    Read the article

  • SQL Developer Data Modeler v3.3 Early Adopter: Collaborative Design via Excel?

    - by thatjeffsmith
    As you may have heard last week, we have a new version of Oracle SQL Developer Data Modeler now available as an Early Adopter release. Version 3.3 has quite a few new features and I’ll be previewing them here. Today’s topic is our new Excel integration. It builds off of last week’s lesson: Search, so you may want to go read that first. They say it takes a village to raise a child. I say it takes a team to build a data model. You have your techie folks, your business folks, your in-betweeners, and your database geeks. Who gets to define how customers are represented and stored in your database? That data lives forever, so you better get it right from the beginning, or you’ll be living in a hacker’s paradise for years to come. Lots of good rantings, ravings, and advice on this topic in general on Karen Lopez’s (@datachick) blog. But let’s say you are the primary modeler on a project. You dutifully interview the business folks for their requirements. You sit down and start to model and think you’re pretty close. Now you need someone to confirm your assumptions and provide some feedback. Do you send your model over? Take a screenshot and blow it up on a whiteboard? Export to HTML and let them take a magic marker to their monitors? Or maybe you bite the bullet and install your modeling software on their desktops and take the hours or days required to train them up on how to use the the tool. Wouldn’t it be nice if they could just mark up their corrections in Excel and let you suck the updates back in? This is what we have started to build in Oracle SQL Developer Data Modeler. Let’s say you have a new table called ‘UT_STARTUPS.’ It looks a little something like this: A table in Oracle SQL Developer Data Modeler What I would like to do is have my team or co-worker review how I have defined those columns. Perhaps TIMESTAMP is overkill or maybe the column names themselves aren’t up to snuff. What I am going to do is now search for all the columns in my table, then export that to Excel. So do a search for UT_STARTUPS. Search, filter, then Report With the filter set to ‘Columns,’ if I do a report I’ll be only getting the columns that are resolving to my search term. So as long as my table name is unique in the model, I should get what I’m looking for. Here’s what I see when I click on the Report button: XLS or XLSX, either format is just fine I want to decide how the Column data is exported to Excel though, so I’m going to create a report template that I can use going forward. So click the ‘Manage’ button and setup a new template. I’m going to call mine ‘CollaborativeDevelopment.’ The templates allow me to define what properties are included in the reports. Once this is set, I’ll have the XLS file generated, and get to work Now let the Excel junkies do their stuff Note that not ALL of the report properties are update-able (yes, I made up a new word there) via Excel. We’ll have the full list of properties documented going forward, but in my Excel sheet, note that I can’t change the table name or the data types for the columns. I’m going to update some column names and supply ‘nice’ comments so the database users know what’s what. Here’s my input for the designer/architect/database dude: Be kind, please rew…use comments. Save the file, email it back to your modeler. Update the model from Excel That’s right, it’s a right mouse click from your model in the tree If everything goes right, you’ll see a nice confirmation message: It’s alive! 
Another to-do item on tap – making this dialog more informative. We’ll be showing exactly what in your model was updated from Excel. Let’s take another look at the model now Voila! Why are we doing this again? The goal is to reduce the number of round-trips from the modeler and the business process owner. One is used to working with Excel – why not allow them to mark up their changes in the tool they already know? This is an early adopter release and I anticipate this feature getting a good bit of tuning up before we release. Why don’t you download 3.3, give it a whirl, and let us know what you think?

    Read the article

  • MySQL Cluster 7.3: On-Demand Webinar and Q&A Available

    - by Mat Keep
    The on-demand webinar for the MySQL Cluster 7.3 Development Release is now available. You can learn more about the design, implementation and getting started with all of the new MySQL Cluster 7.3 features from the comfort and convenience of your own device, including:
    - Foreign Key constraints in MySQL Cluster
    - Node.js NoSQL API
    - Auto-installation of higher performance, distributed clusters
    We received some great questions over the course of the webinar, and I wanted to share those for the benefit of a broader audience.
    Q. What Foreign Key actions are supported?
    A. The core referential actions defined in the SQL:2003 standard are implemented: CASCADE, RESTRICT, NO ACTION, SET NULL.
    Q. Where are Foreign Keys implemented, i.e. data nodes or SQL nodes?
    A. They are implemented in the data nodes, and can therefore be enforced for both the SQL and NoSQL APIs.
    Q. Are they compatible with the InnoDB Foreign Key implementation?
    A. Yes, with the following exceptions:
    - InnoDB doesn’t support “No Action” constraints; MySQL Cluster does.
    - You can choose to suspend FK constraint enforcement with InnoDB using the FOREIGN_KEY_CHECKS parameter; at the moment, MySQL Cluster ignores that parameter.
    - You cannot set up FKs between 2 tables where one is stored using MySQL Cluster and the other InnoDB.
    - You cannot change primary keys through the NDB API, which means that the MySQL Server actually has to simulate such operations by deleting and re-adding the row. If the PK in the parent table has a FK constraint on it then this causes non-ideal behaviour. With Restrict or No Action constraints, the change will result in an error. With Cascaded constraints, you’d want the rows in the child table to be updated with the new FK value but, the implicit delete of the row from the parent table would remove the associated rows from the child table and the subsequent implicit insert into the parent wouldn’t reinstate the child rows. For this reason, an attempt to add an ON UPDATE CASCADE where the parent column is a primary key will be rejected.
    Q. Does adding or dropping Foreign Keys cause downtime due to a schema change?
    A. Nope, this is an online operation. MySQL Cluster supports a number of on-line schema changes, i.e. adding and dropping indexes, adding columns, etc.
    Q. Where can I see an example of node.js with MySQL Cluster?
    A. Check out the tutorial and download the code from GitHub.
    Q. Can I use the auto-installer to support remote deployments? How about setting up MySQL Cluster 7.2?
    A. Yes to both!
    Q. Can I get a demo?
    A. Check out the tutorial. You can download the code from http://labs.mysql.com/ (go to the Select Build drop-down box).
    Q. What is the minimum internet speed required for a geo-distributed cluster with synchronous replication?
    A. If you're splitting your cluster between sites then we recommend a network latency of 20ms or less. Alternatively, use MySQL asynchronous replication, where the latency of your WAN doesn't impact the latency of your reads/writes.
    Q. Where can one learn more about the PayPal project with MySQL Cluster?
    A. Take a look at the following - you'll find press coverage, a video and slides from their keynote presentation.
    So, if you want to learn more, listen to the new MySQL Cluster 7.3 on-demand webinar. MySQL Cluster 7.3 is still in the development phase, so it would be great to get your feedback on these new features, and things you want to see!
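    As a quick illustration of the new capability (a sketch, not taken from the webinar itself), a parent/child pair of NDB tables with a cascading delete can be created like this, here issued from Python via mysql.connector against one of the cluster's SQL nodes; the table names are made up, and ON UPDATE is left as RESTRICT since, as noted above, ON UPDATE CASCADE against a parent primary key is rejected:

        import mysql.connector  # table names are illustrative

        conn = mysql.connector.connect(host="sqlnode1", user="user",
                                       password="secret", database="clusterdb")
        cur = conn.cursor()

        cur.execute("""
            CREATE TABLE parent (
                id INT NOT NULL PRIMARY KEY
            ) ENGINE=NDBCLUSTER""")

        cur.execute("""
            CREATE TABLE child (
                id        INT NOT NULL PRIMARY KEY,
                parent_id INT NOT NULL,
                FOREIGN KEY (parent_id) REFERENCES parent (id)
                    ON UPDATE RESTRICT ON DELETE CASCADE
            ) ENGINE=NDBCLUSTER""")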

    Read the article

  • Fun With the Chrome JavaScript Console and the Pluralsight Website

    - by Steve Michelotti
    Originally posted on: http://geekswithblogs.net/michelotti/archive/2013/07/24/fun-with-the-chrome-javascript-console-and-the-pluralsight-website.aspxI’m currently working on my third course for Pluralsight. Everyone already knows that Scott Allen is a “dominating force” for Pluralsight but I was curious how many courses other authors have published as well. The Pluralsight Authors page - http://pluralsight.com/training/Authors – shows all 146 authors and you can click on any author’s page to see how many (and which) courses they have authored. The problem is: I don’t want to have to click into 146 pages to get a count for each author. With this in mind, I figured I could write a little JavaScript using the Chrome JavaScript console to do some “detective work.” My first step was to figure out how the HTML was structured on this page so I could do some screen-scraping. Right-click the first author - “Inspect Element”. I can see there is a primary <div> with a class of “main” which contains all the authors. Each author is in an <h3> with an <a> tag containing their name and link to their page:     This web page already has jQuery loaded so I can use $ directly from the console. This allows me to just use jQuery to inspect items on the current page. Notice this is a multi-line command. In order to use multiple lines in the console you have to press SHIFT-ENTER to go to the next line:     Now I can see I’m extracting data just fine. At this point I want to follow each URL. Then I want to screen-scrape this next page to see how many courses each author has done. Let’s take a look at the author detail page:       I can see we have a table (with a css class of “course”) that contains rows for each course authored. This means I can get the number of courses pretty easily like this:     Now I can put this all together. Back on the authors page, I want to follow each URL, extract the returned HTML, and grab the count. In the code below, I simply use the jQuery $.get() method to get the author detail page and the “data” variable that is in the callback contains the HTML. A nice feature of jQuery is that I can simply put this HTML string inside of $() and I can use jQuery selectors directly on it in conjunction with the find() method:     Now I’m getting somewhere. I have every Pluralsight author and how many courses each one has authored. But that’s not quite what I’m after – what I want to see are the authors that have the MOST courses in the library. What I’d like to do is to put all of the data in an array and then sort that array descending by number of courses. I can add an item to the array after each author detail page is returned but the catch here is that I can’t perform the sort operation until ALL of the author detail pages have executed. The jQuery $.get() method is naturally an async method so I essentially have 146 async calls and I don’t want to perform my sort action until ALL have completed (side note: don’t run this script too many times or the Pluralsight servers might think your an evil hacker attempting a DoS attack and deny you). My C# brain wants to use a WaitHandle WaitAll() method here but this is JavaScript. I was able to do this by using the jQuery Deferred() object. I create a new deferred object for each request and push it onto a deferred array. After each request is complete, I signal completion by calling the resolve() method. Finally, I use a $.when.apply() method to execute my descending sort operation once all requests are complete. 
Here is my complete console command:

    var authorList = [],
        defList = [];
    $(".main h3 a").each(function() {
        var def = $.Deferred();
        defList.push(def);
        var authorName = $(this).text();
        var authorUrl = $(this).attr('href');
        $.get(authorUrl, function(data) {
            var courseCount = $(data).find("table.course tbody tr").length;
            authorList.push({ name: authorName, numberOfCourses: courseCount });
            def.resolve();
        });
    });
    $.when.apply($, defList).then(function() {
        console.log("*Everything* is complete");
        var sortedList = authorList.sort(function(obj1, obj2) {
            return obj2.numberOfCourses - obj1.numberOfCourses;
        });
        for (var i = 0; i < sortedList.length; i++) {
            console.log(authorList[i]);
        }
    });

And here are the results: WOW! John Sonmez has 44 courses!! And Matt Milner has 29! I guess Scott Allen isn’t the only “dominating force”. I would have assumed Scott Allen was #1 but he comes in as #3 in total course count (of course Scott has 11 courses in the Top 50, and 14 in the Top 100 which is incredible!). Given that I’m in the middle of producing only my third course, I better get to work!

    Read the article

  • Adding a DLL to the GAC in Windows 7

    - by Jim Giercyk
    I recently created a DLL and I wanted to reference it from a project I was developing in Visual Studio.  In previous versions of Windows, doing so was simply a matter of dropping the DLL file in the C:\Windows\assembly folder.  That would add the DLL to the Global Assembly Cache (GAC) and make it accessible in Visual Studio.  However, as is often the case, Window 7 is different.  Even if you have Administrator privileges on your machine, you still do not have permission to drop a file in the assembly folder.  Undaunted, I thought about using the old DOS command line utility gacutil.exe.  Microsoft developed the tool as part of the .Net framework, and it is available in the Windows SDK Framework Tools.  If you have never used gacutil.exe before, you can find out everything you ever wanted to know but were afraid to ask here: http://msdn.microsoft.com/en-us/library/ex0ss12c(v=vs.80).aspx .  Unfortunately, if you do not have the Windows SDK loaded on your development machine, you will need to install it to use gacutil, but it is relatively quick and painless, and the framework tools are very useful.  Look here for your latest SDK: http://www.microsoft.com/download/en/search.aspx?q=Windows%20SDK .   After installing the SDK, I tried installing my DLL to the GAC by running gacutil from a DOS command line: That’s odd.  Microsoft is shipping a tool that cannot be executed even with Administrator rights?  Let me stop here and say that I am by no means a Windows security expert, so I actually did contact my system administrators, and they were not sure how to fix the problem….there must be a super administrator access level, but it isn’t available to your average developer in my company.  The solution outlined here is working within the boundaries of a normal windows Administrator. So, now the hacker in me bubbles to the surface.  What if I were to create a simple BAT file containing the gacutil command?  It’s so crazy it just might work!  Ugh!  I was starting to think this would never work, but then I realized that simply executing a batch program did not change my level of access.  Typically in Windows 7, you would select the “Run As Administrator” option to temporarily act as an administrator for the purpose of executing a process.  However, that option is not available for BAT files run from the command line.  SOLUTION: Create a desktop shortcut to execute the BAT file, which in turn will execute the line command…..are you still with me?  I created a shortcut and pointed it to my batch file.  Theoretically, all I need to do now is right-click on the shortcut and select “Run As Administrator” and we’re good, right?  Well, kinda.  If you notice the syntax of my BAT file, the name of the DLL is passed in as a parameter.  Therefore, I either have to hard-code the file name in the BAT program (YUCK!!), or I can leave the parameter and drag the DLL file to the shortcut and drop it.  Sweet, drag-and-drop works for me…..but if I use the drag-and-drop method, there is no way for me to right-click and select “Run As Administrator”.  That is not a problem…..I simply have to adjust the properties of the shortcut I created and I am in business.  I Right-clicked on the shortcut and select “Properties”.  Under the “Shortcut” tab there is an “Advanced” button…..I clicked it. All I needed to do was check the “Run As Administrator” box: In summary, what I have done is create a BAT file to execute a command line utility, gacutil.exe.  
Then, rather than executing the BAT file from the command line, I created a desktop shortcut to run it and set the shortcut properties to “Run As Administrator”.  This will effectively mean I am executing the command line utility with Administrator privileges.  Pretty sneaky. Now, when I drag the DLL file  over to the shortcut, it starts the BAT file and adds the DLL to the assembly cache.  I created another BAT file to remove a DLL from the GAC in case the need should arise.  The code for that is: Give it a try.  I can’t imagine why updating the GAC has been made into such a chore in Windows 7.  Hopefully there is a service pack in the works that will give developers the functionality they had in Windows XP, but in the meantime, this workaround is extremely useful.

    Read the article

  • My Red Gate Experience

    - by Colin Rothwell
    I’m Colin, and I’ve been an intern working with Mike in publishing on Simple-Talk and SQLServerCentral for the past ten weeks. I’ve mostly been working “behind the scenes”, making improvements to the spam filtering, along with various other small tweaks. When I arrived at Red Gate, one of the first things Mike asked me was what I wanted to get out of the internship. It wasn’t a question I’d given a great deal of thought to, but my immediate response was the same as almost anybody: to support my growing family. Well, ok, not quite that, but money was certainly a motivator, along with simply making sure that I didn’t get bored over the summer. Three months is a long time to fill, and many of my friends end up getting bored, or worse, knitting obsessively. With the arrogance which seems fairly common among Cambridge people, I wasn’t expecting to really learn much here! In my mind, the part of the year where I am at Uni is the part where I learn things, whilst Red Gate would be an opportunity to apply what I’d learnt. Thankfully, the opposite is true: I’ve learnt a lot during my time here, and there has been a definite positive impact on the way I write code. The first thing I’ve really learnt is that test-driven development is, in general, a sensible way of working. Before coming, I didn’t really get it: how could you test something you hadn’t yet written? It didn’t make sense! My problem was seeing a test as having to test all the behaviour of a given function. Writing tests which test the bare minimum possible and building them up is a really good way of crystallising the direction the code needs to grow in, and ensures you never attempt to write too much code at time. One really good experience of this was early on in my internship when Mike and I were working on the query used to list active authors: I’d written something which I thought would do the trick, but by starting again using TDD we grew something which revealed that there were several subtle mistakes in the query I’d written. I’ve also been awakened to the value of pair programming. Whilst I could sort of see the point before coming, I also thought that it was impossible that two people would ever get more done at the same computer than if they were working separately. I still think that this is true for projects with pieces that developers can easily work on independently, and with developers who both know the codebase, but I’ve found that pair programming can be really good for learning a code base, and for building up small projects to the point where you can start working on separate components, as well as solving particularly difficult problems. Later on in my internship, for my down tools week project, I was working on adding Python support to Glimpse. Another intern and I we pair programmed the entire project, using ping pong pair programming as much as possible. One bonus that this brought which I wasn’t expecting was that I found myself less prone to distraction: with someone else peering over my shoulder, I didn’t have the ever-present temptation to open gmail, or facebook, or yammer, or twitter, or hacker news, or reddit, and so on, and so forth. I’m quite proud of this project: I think it’s some of the best code I’ve written. I’ve also been really won over to the value of descriptive variables names. In my pre-Red Gate life, as a lone-ranger style cowboy programmer, I’d developed a tendency towards laziness in variable names, sometimes abbreviating or, worse, using acronyms. 
I’ve swiftly realised that this is a bad idea when working with a team: saving a few key strokes is inevitably not worth it when it comes to reading code again in the future. Longer names also mean you can do away with a majority of comments. I appreciate that if you’ve come up with an O(n*log n) algorithm for something which seemed O(n^2), you probably want to explain how it works, but explaining what a variable name means is a big no no: it’s so very easy to change the behaviour of the code, whilst forgetting about the comments. Whilst at Red Gate, I took the opportunity to attend a code retreat, which really helped me to solidify all the things I’d learnt. To be completely free of any existing code base really lets you focus on best practises and think about how you write code. If you get a chance to go on a similar event, I’d highly recommend it! Cycling to Red Gate, I’ve also become much better at fitting inner tubes: if you’re struggling to get the tube out, or re-fit the tire, letting a bit of air out usually helps. I’ve also become quite a bit better at foosball and will miss having a foosball table! I’d like to finish off by saying thank you to everyone at Red Gate for having me. I’ve really enjoyed working with, and learning from, the team that brings you this web site. If you meet any of them, buy them a drink!

    Read the article

  • How Can I Safely Destroy Sensitive Data CDs/DVDs?

    - by Jason Fitzpatrick
    You have a pile of DVDs with sensitive information on them and you need to safely and effectively dispose of them so no data recovery is possible. What’s the most safe and efficient way to get the job done? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. The Question SuperUser reader HaLaBi wants to know how he can safely destroy CDs and DVDs with personal data on them: I have old CDs/DVDs which have some backups, these backups have some work and personal files. I always had problems when I needed to physically destroy them to make sure no one will reuse them. Breaking them is dangerous, pieces could fly fast and may cause harm. Scratching them badly is what I always do but it takes long time and I managed to read some of the data in the scratched CDs/DVDs. What’s the way to physically destroy a CD/DVD safely? How should he approach the problem? The Answer SuperUser contributor Journeyman Geek offers a practical solution coupled with a slightly mad-scientist solution: The proper way is to get yourself a shredder that also handles cds – look online for cd shredders. This is the right option if you end up doing this routinely. I don’t do this very often – For small scale destruction I favour a pair of tin snips – they have enough force to cut through a cd, yet are blunt enough to cause small cracks along the sheer line. Kitchen shears with one serrated side work well too. You want to damage the data layer along with shearing along the plastic, and these work magnificently. Do it in a bag, cause this generates sparkly bits. There’s also the fun, and probably dangerous way – find yourself an old microwave, and microwave them. I would suggest doing this in a well ventilated area of course, and not using your mother’s good microwave. There’s a lot of videos of this on YouTube – such as this (who’s done this in a kitchen… and using his mom’s microwave). This results in a very much destroyed cd in every respect. If I was an evil hacker mastermind, this is what I’d do. The other options are better for the rest of us. Another contributor, Keltari, notes that the only safe (and DoD approved) way to dispose of data is total destruction: The answer by Journeyman Geek is good enough for almost everything. But oddly, that common phrase “Good enough for government work” does not apply – depending on which part of the government. It is technically possible to recover data from shredded/broken/etc CDs and DVDs. If you have a microscope handy, put the disc in it and you can see the pits. The disc can be reassembled and the data can be reconstructed — minus the data that was physically destroyed. So why not just pulverize the disc into dust? Or burn it to a crisp? While technically, that would completely eliminate the data, it leaves no record of the disc having existed. And in some places, like DoD and other secure facilities, the data needs to be destroyed, but the disc needs to exist. If there is a security audit, the disc can be pulled to show it has been destroyed. So how can a disc exist, yet be destroyed? Well, the most common method is grinding the disc down to destroy the data, yet keep the label surface of the disc intact. Basically, it’s no different than using sandpaper on the writable side, till the data is gone. Have something to add to the explanation? Sound off in the the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here. 
    

    Read the article

  • django/python: is one view that handles two sibling models a good idea?

    - by clime
    I am using django multi-table inheritance: Video and Image are models derived from Media. I have implemented two views: video_list and image_list, which are just proxies to media_list. media_list returns images or videos (based on input parameter model) for a certain object, which can be of type Event, Member, or Crag. The view alters its behaviour based on input parameter action (better name would be mode), which can be of value "edit" or "view". The problem is that I need to ask whether the input parameter model contains Video or Image in media_list so that I can do the right thing. Similar condition is also in helper method media_edit_list that is called from the view. I don't particularly like it but the only alternative I can think of is to have separate (but almost the same) logic for video_list and image_list and then probably also separate helper methods for videos and images: video_edit_list, image_edit_list, video_view_list, image_view_list. So four functions instead of just two. That I like even less because the video functions would be very similar to the respective image functions. What do you recommend? Here is extract of relevant parts: http://pastebin.com/07t4bdza. I'll also paste the code here: #urls url(r'^media/images/(?P<rel_model_tag>(event|member|crag))/(?P<rel_object_id>\d+)/(?P<action>(view|edit))/$', views.image_list, name='image-list') url(r'^media/videos/(?P<rel_model_tag>(event|member|crag))/(?P<rel_object_id>\d+)/(?P<action>(view|edit))/$', views.video_list, name='video-list') #views def image_list(request, rel_model_tag, rel_object_id, mode): return media_list(request, Image, rel_model_tag, rel_object_id, mode) def video_list(request, rel_model_tag, rel_object_id, mode): return media_list(request, Video, rel_model_tag, rel_object_id, mode) def media_list(request, model, rel_model_tag, rel_object_id, mode): rel_model = tag_to_model(rel_model_tag) rel_object = get_object_or_404(rel_model, pk=rel_object_id) if model == Image: star_media = rel_object.star_image else: star_media = rel_object.star_video filter_params = {} if rel_model == Event: filter_params['event'] = rel_object_id elif rel_model == Member: filter_params['members'] = rel_object_id elif rel_model == Crag: filter_params['crag'] = rel_object_id media_list = model.objects.filter(~Q(id=star_media.id)).filter(**filter_params).order_by('date_added').all() context = { 'media_list': media_list, 'star_media': star_media, } if mode == 'edit': return media_edit_list(request, model, rel_model_tag, rel_object_id, context) return media_view_list(request, model, rel_model_tag, rel_object_id, context) def media_view_list(request, model, rel_model_tag, rel_object_id, context): if request.is_ajax(): context['base_template'] = 'boxes/base-lite.html' return render(request, 'media/list-items.html', context) def media_edit_list(request, model, rel_model_tag, rel_object_id, context): if model == Image: get_media_edit_record = get_image_edit_record else: get_media_edit_record = get_video_edit_record media_list = [get_media_edit_record(media, rel_model_tag, rel_object_id) for media in context['media_list']] if context['star_media']: star_media = get_media_edit_record(context['star_media'], rel_model_tag, rel_object_id) else: star_media = None json = simplejson.dumps({ 'star_media': star_media, 'media_list': media_list, }) return HttpResponse(json, content_type=json_response_mimetype(request)) def get_image_edit_record(image, rel_model_tag, rel_object_id): record = { 'url': image.image.url, 'name': image.title or 
image.filename, 'type': mimetypes.guess_type(image.image.path)[0] or 'image/png', 'thumbnailUrl': image.thumbnail_2.url, 'size': image.image.size, 'id': image.id, 'media_id': image.media_ptr.id, 'starUrl':reverse('image-star', kwargs={'image_id': image.id, 'rel_model_tag': rel_model_tag, 'rel_object_id': rel_object_id}), } return record def get_video_edit_record(video, rel_model_tag, rel_object_id): record = { 'url': video.embed_url, 'name': video.title or video.url, 'type': None, 'thumbnailUrl': video.thumbnail_2.url, 'size': None, 'id': video.id, 'media_id': video.media_ptr.id, 'starUrl': reverse('video-star', kwargs={'video_id': video.id, 'rel_model_tag': rel_model_tag, 'rel_object_id': rel_object_id}), } return record # models class Media(models.Model, WebModel): title = models.CharField('title', max_length=128, default='', db_index=True, blank=True) event = models.ForeignKey(Event, null=True, default=None, blank=True) crag = models.ForeignKey(Crag, null=True, default=None, blank=True) members = models.ManyToManyField(Member, blank=True) added_by = models.ForeignKey(Member, related_name='added_images') date_added = models.DateTimeField('date added', auto_now_add=True, null=True, default=None, editable=False) class Image(Media): image = ProcessedImageField(upload_to='uploads', processors=[ResizeToFit(width=1024, height=1024, upscale=False)], format='JPEG', options={'quality': 75}) thumbnail_1 = ImageSpecField(source='image', processors=[SmartResize(width=178, height=134)], format='JPEG', options={'quality': 75}) thumbnail_2 = ImageSpecField(source='image', #processors=[SmartResize(width=256, height=192)], processors=[ResizeToFit(height=164)], format='JPEG', options={'quality': 75}) class Video(Media): url = models.URLField('url', max_length=256, default='') embed_url = models.URLField('embed url', max_length=256, default='', blank=True) author = models.CharField('author', max_length=64, default='', blank=True) thumbnail = ProcessedImageField(upload_to='uploads', processors=[ResizeToFit(width=1024, height=1024, upscale=False)], format='JPEG', options={'quality': 75}, null=True, default=None, blank=True) thumbnail_1 = ImageSpecField(source='thumbnail', processors=[SmartResize(width=178, height=134)], format='JPEG', options={'quality': 75}) thumbnail_2 = ImageSpecField(source='thumbnail', #processors=[SmartResize(width=256, height=192)], processors=[ResizeToFit(height=164)], format='JPEG', options={'quality': 75}) class Crag(models.Model, WebModel): name = models.CharField('name', max_length=64, default='', db_index=True) normalized_name = models.CharField('normalized name', max_length=64, default='', editable=False) type = models.IntegerField('crag type', null=True, default=None, choices=crag_types) description = models.TextField('description', default='', blank=True) country = models.ForeignKey('country', null=True, default=None) #TODO: make this not null when db enables it latitude = models.FloatField('latitude', null=True, default=None) longitude = models.FloatField('longitude', null=True, default=None) location_index = FixedCharField('location index', length=24, default='', editable=False, db_index=True) # handled by db, used for marker clustering added_by = models.ForeignKey('member', null=True, default=None) #route_count = models.IntegerField('route count', null=True, default=None, editable=False) date_created = models.DateTimeField('date created', auto_now_add=True, null=True, default=None, editable=False) last_modified = models.DateTimeField('last modified', auto_now=True, 
null=True, default=None, editable=False) star_image = models.ForeignKey('Image', null=True, default=None, related_name='star_crags', on_delete=models.SET_NULL) star_video = models.ForeignKey('Video', null=True, default=None, related_name='star_crags', on_delete=models.SET_NULL)
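    One common way to avoid the scattered `if model == Image` checks (a sketch only, reusing the names from the question and omitting the parts that stay the same) is to push the per-model differences into a small dispatch table so a single generic view and helper can serve both sibling models:

        # Per-model differences live in one place instead of scattered if-checks.
        # Image, Video, get_image_edit_record, get_video_edit_record, tag_to_model
        # are the question's own definitions.
        MEDIA_CONFIG = {
            Image: {'star_attr': 'star_image', 'edit_record': get_image_edit_record},
            Video: {'star_attr': 'star_video', 'edit_record': get_video_edit_record},
        }

        def media_list(request, model, rel_model_tag, rel_object_id, mode):
            cfg = MEDIA_CONFIG[model]
            rel_model = tag_to_model(rel_model_tag)
            rel_object = get_object_or_404(rel_model, pk=rel_object_id)
            star_media = getattr(rel_object, cfg['star_attr'])
            # ... filtering and dispatch to edit/view as in the original view ...

        def media_edit_list(request, model, rel_model_tag, rel_object_id, context):
            get_media_edit_record = MEDIA_CONFIG[model]['edit_record']
            # ... rest unchanged ...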

    Read the article

  • django/python: is one view that handles two separate models a good idea?

    - by clime
    I am using django multi-table inheritance: Video and Image are models derived from Media. I have implemented two views: video_list and image_list, which are just proxies to media_list. media_list returns images or videos (based on input parameter model) for a certain object, which can be of type Event, Member, or Crag. It alters its behaviour based on input parameter action, which can be either "edit" or "view". The problem is that I need to ask whether the input parameter model contains Video or Image in media_list so that I can do the right thing. Similar condition is also in helper method media_edit_list that is called from the view. I don't particularly like it but the only alternative I can think of is to have separate logic for video_list and image_list and then probably also separate helper methods for videos and images: video_edit_list, image_edit_list, video_view_list, image_view_list. So four functions instead of just two. That I like even less because the video functions would be very similar to the respective image functions. What do you recommend? Here is extract of relevant parts: http://pastebin.com/07t4bdza. I'll also paste the code here: #urls url(r'^media/images/(?P<rel_model_tag>(event|member|crag))/(?P<rel_object_id>\d+)/(?P<action>(view|edit))/$', views.video_list, name='image-list') url(r'^media/videos/(?P<rel_model_tag>(event|member|crag))/(?P<rel_object_id>\d+)/(?P<action>(view|edit))/$', views.image_list, name='video-list') #views def image_list(request, rel_model_tag, rel_object_id, action): return media_list(request, Image, rel_model_tag, rel_object_id, action) def video_list(request, rel_model_tag, rel_object_id, action): return media_list(request, Video, rel_model_tag, rel_object_id, action) def media_list(request, model, rel_model_tag, rel_object_id, action): rel_model = tag_to_model(rel_model_tag) rel_object = get_object_or_404(rel_model, pk=rel_object_id) if model == Image: star_media = rel_object.star_image else: star_media = rel_object.star_video filter_params = {} if rel_model == Event: filter_params['media__event'] = rel_object_id elif rel_model == Member: filter_params['media__members'] = rel_object_id elif rel_model == Crag: filter_params['media__crag'] = rel_object_id media_list = model.objects.filter(~Q(id=star_media.id)).filter(**filter_params).order_by('media__date_added').all() context = { 'media_list': media_list, 'star_media': star_media, } if action == 'edit': return media_edit_list(request, model, rel_model_tag, rel_model_id, context) return media_view_list(request, model, rel_model_tag, rel_model_id, context) def media_view_list(request, model, rel_model_tag, rel_object_id, context): if request.is_ajax(): context['base_template'] = 'boxes/base-lite.html' return render(request, 'media/list-items.html', context) def media_edit_list(request, model, rel_model_tag, rel_object_id, context): if model == Image: get_media_record = get_image_record else: get_media_record = get_video_record media_list = [get_media_record(media, rel_model_tag, rel_object_id) for media in context['media_list']] if context['star_media']: star_media = get_media_record(star_media, rel_model_tag, rel_object_id) star_media['starred'] = True else: star_media = None json = simplejson.dumps({ 'star_media': star_media, 'media_list': media_list, }) return HttpResponse(json, content_type=json_response_mimetype(request)) # models class Media(models.Model, WebModel): title = models.CharField('title', max_length=128, default='', db_index=True, blank=True) event = 
models.ForeignKey(Event, null=True, default=None, blank=True) crag = models.ForeignKey(Crag, null=True, default=None, blank=True) members = models.ManyToManyField(Member, blank=True) added_by = models.ForeignKey(Member, related_name='added_images') date_added = models.DateTimeField('date added', auto_now_add=True, null=True, default=None, editable=False) def __unicode__(self): return self.title def get_absolute_url(self): return self.image.url if self.image else self.video.embed_url class Image(Media): image = ProcessedImageField(upload_to='uploads', processors=[ResizeToFit(width=1024, height=1024, upscale=False)], format='JPEG', options={'quality': 75}) thumbnail_1 = ImageSpecField(source='image', processors=[SmartResize(width=178, height=134)], format='JPEG', options={'quality': 75}) thumbnail_2 = ImageSpecField(source='image', #processors=[SmartResize(width=256, height=192)], processors=[ResizeToFit(height=164)], format='JPEG', options={'quality': 75}) class Video(Media): url = models.URLField('url', max_length=256, default='') embed_url = models.URLField('embed url', max_length=256, default='', blank=True) author = models.CharField('author', max_length=64, default='', blank=True) thumbnail = ProcessedImageField(upload_to='uploads', processors=[ResizeToFit(width=1024, height=1024, upscale=False)], format='JPEG', options={'quality': 75}, null=True, default=None, blank=True) thumbnail_1 = ImageSpecField(source='thumbnail', processors=[SmartResize(width=178, height=134)], format='JPEG', options={'quality': 75}) thumbnail_2 = ImageSpecField(source='thumbnail', #processors=[SmartResize(width=256, height=192)], processors=[ResizeToFit(height=164)], format='JPEG', options={'quality': 75}) class Crag(models.Model, WebModel): name = models.CharField('name', max_length=64, default='', db_index=True) normalized_name = models.CharField('normalized name', max_length=64, default='', editable=False) type = models.IntegerField('crag type', null=True, default=None, choices=crag_types) description = models.TextField('description', default='', blank=True) country = models.ForeignKey('country', null=True, default=None) #TODO: make this not null when db enables it latitude = models.FloatField('latitude', null=True, default=None) longitude = models.FloatField('longitude', null=True, default=None) location_index = FixedCharField('location index', length=24, default='', editable=False, db_index=True) # handled by db, used for marker clustering added_by = models.ForeignKey('member', null=True, default=None) #route_count = models.IntegerField('route count', null=True, default=None, editable=False) date_created = models.DateTimeField('date created', auto_now_add=True, null=True, default=None, editable=False) last_modified = models.DateTimeField('last modified', auto_now=True, null=True, default=None, editable=False) star_image = models.OneToOneField('Image', null=True, default=None, related_name='star_crags', on_delete=models.SET_NULL) star_video = models.OneToOneField('Video', null=True, default=None, related_name='star_crags', on_delete=models.SET_NULL)

    Read the article

  • Are there negative impacts of open source on a commercial environment?

    - by Lostsoul
    I know this is not a good fit for Stack Overflow but wasn't sure if it was good for this site also so let me know if its not and I'll delete it. I love programming for fun but my role in my company is not technical. I have always loved the hacker culture and have been trying to drive that openness within my company from day one. My company has a very broad range of products and there are a few that are not strategic to us so I wanted to open source them (so we can focus on what makes us unique and open source the products that every firm has). Our industry does not open source(we would be the first firm to try this) and the feedback I'm getting from my management team is either 1) we'll destroy the industry or 2) all competitive commercial firms will unite against us and we'll be wiped out either way. I disagreed on both points because I think transparency will only grow our industry and our firm (think of McDonalds/KFC sharing their recipe openly, people may copy you, competitors may target you, but customers also may feel more comfortable buying your product. The value add, I believe, is in the delivery and experience not in hoarding the recipe). It's a big battle in my firm right now between the IT people who have seen the positive effects of sharing and the business people who think we'll be giving up everything (they prefer we sell parts we want to opensource, but in their defense this is standard when divesting something). Our industry is very secretive and I don't want to put anyone(even my competitors employees) out of a job yet I don't want to protect inefficient people by not being open with everyone. Yet I've seen so many amazing technologies created in interesting ways just by giving people freedom to take apart code and put it back together. I'm interested in hearing people's thoughts(doesn't have to be to my specific situation, I'm looking for the general lessons). Its a very stressful decision(but one I feel I must make) because if we go the open source route then there will be no going back. So what are your thoughts? Does open sourcing apply generally or is it only really applicable to software? Is it overall good for people in the industry and outside? I'm actually more interested in the negativeness effects(although positive are welcomed as well) Update: Long story short, although code is involved this is not so much about code as it is more about the idea of open sourcing. We are a mid sized quant hedge fund. We have some unique strategies but also have the standard long/short, arbitrage, global macro, etc.. funds. We are keeping the unique funds we have but the other stuff that everyone else has we are considering open sourcing (We have put in years of work & millions of dollars into. Our funds is pretty popular and our performance is either in first or second quartile so I suspect there will be interest but I don't know to what extent). The goal is not to get a community to work for us or anything, the goal is to let anyone who wants to tinker with it do so and create anything they want (it will not be part of our product line although I may unofficially allocate some our of staff's time to assist any community that grows). Although the code base is quite large, the value in this is the industry knowledge and approaches we have acquired (there are many books on artificial intelligence and quant trading but they are often years behind what's really going on as most firms forbid their staff from discussing what they are doing). 
We are also considering, after we move our clients out, letting the software still run and output the resulting portfolios for free as well, so people can at least see the results (as long as we have available infrastructure). I think our main choices are: we can continue to fight for market share in products that are becoming commoditized, we can shut the funds/products down (and keep the code, but no one outside of our firm will ever learn from it), or we can open source it and let people do what they want. By open sourcing it, my idea is that the talent pool in the industry will grow, because right now most of our hires have the same background (CFA, MBA, similar school, same experience, etc., because we can't spend time training people, so the industry 'standardizes' most people and thus the firms themselves start to look/act similar), but this may allow us to identify talent that has never been in the industry before (if we use a GPL license then, as people learn from what we did, we can learn from what they do as well and maybe apply it to other areas of our firm). I see a lot of benefits but not many negatives, while my peers at the company see the opposite.

    Read the article

  • Benchmark MySQL Cluster using flexAsynch: No free node id found for mysqld(API)?

    - by quanta
    I am going to benchmark MySQL Cluster using flexAsynch following this guide, details below:
        mkdir /usr/local/mysqlc732/
        cd /usr/local/src/mysql-cluster-gpl-7.3.2
        cmake . -DCMAKE_INSTALL_PREFIX=/usr/local/mysqlc732/ -DWITH_NDB_TEST=ON
        make
        make install
    Everything works fine until this step:
        # /usr/local/mysqlc732/bin/flexAsynch -t 1 -p 80 -l 2 -o 100 -c 100 -n
        FLEXASYNCH - Starting normal mode
        Perform benchmark of insert, update and delete transactions
        1 number of concurrent threads
        80 number of parallel operation per thread
        100 transaction(s) per round
        2 iterations
        Load Factor is 80%
        25 attributes per table
        1 is the number of 32 bit words per attribute
        Tables are with logging
        Transactions are executed with hint provided
        No force send is used, adaptive algorithm used
        Key Errors are disallowed
        Temporary Resource Errors are allowed
        Insufficient Space Errors are disallowed
        Node Recovery Errors are allowed
        Overload Errors are allowed
        Timeout Errors are allowed
        Internal NDB Errors are allowed
        User logic reported Errors are allowed
        Application Errors are disallowed
        Using table name TAB0
        NDBT_ProgramExit: 1 - Failed
    ndb_cluster.log:
        WARNING -- Failed to allocate nodeid for API at 127.0.0.1. Returned eror: 'No free node id found for mysqld(API).'
    I have also recompiled with -DWITH_DEBUG=1 -DWITH_NDB_DEBUG=1. How can I run flexAsynch in debug mode?
        # /usr/local/mysqlc732/bin/flexAsynch -h
        FLEXASYNCH
        Perform benchmark of insert, update and delete transactions
        Arguments:
        -t Number of threads to start, default 1
        -p Number of parallel transactions per thread, default 32
        -o Number of transactions per loop, default 500
        -l Number of loops to run, default 1, 0=infinite
        -load_factor Number Load factor in index in percent (40 -> 99)
        -a Number of attributes, default 25
        -c Number of operations per transaction
        -s Size of each attribute, default 1 (PK is always of size 1, independent of this value)
        -simple Use simple read to read from database
        -dirty Use dirty read to read from database
        -write Use writeTuple in insert and update
        -n Use standard table names
        -no_table_create Don't create tables in db
        -temp Create table(s) without logging
        -no_hint Don't give hint on where to execute transaction coordinator
        -adaptive Use adaptive send algorithm (default)
        -force Force send when communicating
        -non_adaptive Send at a 10 millisecond interval
        -local 1 = each thread its own node, 2 = round robin on node per parallel trans, 3 = random node per parallel trans
        -ndbrecord Use NDB Record
        -r Number of extra loops
        -insert Only run inserts on standard table
        -read Only run reads on standard table
        -update Only run updates on standard table
        -delete Only run deletes on standard table
        -create_table Only run Create Table of standard table
        -drop_table Only run Drop Table on standard table
        -warmup_time Warmup Time before measurement starts
        -execution_time Execution Time where measurement is done
        -cooldown_time Cooldown time after measurement completed
        -table Number of standard table, default 0
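    For what it's worth, this error usually just means the cluster configuration has no spare [api]/[mysqld] slots for flexAsynch to claim - each flexAsynch connection needs a free node id that is not bound to another host. A minimal config.ini sketch that reserves extra unbound API slots (the paths and node layout below are assumptions, not taken from the guide):
        [ndb_mgmd]
        hostname=localhost
        datadir=/usr/local/mysqlc732/ndb_data        # hypothetical path

        [ndbd default]
        noofreplicas=2

        [ndbd]
        hostname=localhost
        datadir=/usr/local/mysqlc732/ndb_data

        [ndbd]
        hostname=localhost
        datadir=/usr/local/mysqlc732/ndb_data

        [mysqld]
        hostname=localhost

        # extra unbound API slots so flexAsynch (and other NDB API clients) can allocate node ids
        [api]
        [api]
        [api]
        [api]
    After adding the slots, the management node has to be restarted (for example with ndb_mgmd --reload) so the new node ids become available; treat this as a sketch rather than a drop-in configuration.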

    Read the article

  • Can this be improved? Scrubbing of dangerous HTML tags.

    - by chobo2
    Hi, I have been finding that for something I consider pretty important there is very little information, and few libraries, on how to deal with this problem. I found this while searching. I really don't know all the million ways a hacker could try to insert dangerous tags. I have a rich HTML editor, so I need to keep non-dangerous tags but strip out the bad ones. So, is this script missing anything? It uses the Html Agility Pack.
    public string ScrubHTML(string html)
    {
        HtmlDocument doc = new HtmlDocument();
        doc.LoadHtml(html);

        //Remove potentially harmful elements
        HtmlNodeCollection nc = doc.DocumentNode.SelectNodes("//script|//link|//iframe|//frameset|//frame|//applet|//object|//embed");
        if (nc != null)
        {
            foreach (HtmlNode node in nc)
            {
                node.ParentNode.RemoveChild(node, false);
            }
        }

        //remove hrefs to java/j/vbscript URLs
        nc = doc.DocumentNode.SelectNodes("//a[starts-with(translate(@href, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'abcdefghijklmnopqrstuvwxyz'), 'javascript')]|//a[starts-with(translate(@href, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'abcdefghijklmnopqrstuvwxyz'), 'jscript')]|//a[starts-with(translate(@href, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'abcdefghijklmnopqrstuvwxyz'), 'vbscript')]");
        if (nc != null)
        {
            foreach (HtmlNode node in nc)
            {
                node.SetAttributeValue("href", "#");
            }
        }

        //remove img with refs to java/j/vbscript URLs
        nc = doc.DocumentNode.SelectNodes("//img[starts-with(translate(@src, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'abcdefghijklmnopqrstuvwxyz'), 'javascript')]|//img[starts-with(translate(@src, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'abcdefghijklmnopqrstuvwxyz'), 'jscript')]|//img[starts-with(translate(@src, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'abcdefghijklmnopqrstuvwxyz'), 'vbscript')]");
        if (nc != null)
        {
            foreach (HtmlNode node in nc)
            {
                node.SetAttributeValue("src", "#");
            }
        }

        //remove on<Event> handlers from all tags
        nc = doc.DocumentNode.SelectNodes("//*[@onclick or @onmouseover or @onfocus or @onblur or @onmouseout or @ondoubleclick or @onload or @onunload]");
        if (nc != null)
        {
            foreach (HtmlNode node in nc)
            {
                node.Attributes.Remove("onFocus");
                node.Attributes.Remove("onBlur");
                node.Attributes.Remove("onClick");
                node.Attributes.Remove("onMouseOver");
                node.Attributes.Remove("onMouseOut");
                node.Attributes.Remove("onDoubleClick");
                node.Attributes.Remove("onLoad");
                node.Attributes.Remove("onUnload");
            }
        }

        // remove any style attributes that contain the word expression (IE evaluates this as script)
        nc = doc.DocumentNode.SelectNodes("//*[contains(translate(@style, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'abcdefghijklmnopqrstuvwxyz'), 'expression')]");
        if (nc != null)
        {
            foreach (HtmlNode node in nc)
            {
                node.Attributes.Remove("stYle");
            }
        }

        return doc.DocumentNode.WriteTo();
    }
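    For reference, the XPath above only matches the handful of event attributes it names, and the href/src checks only catch values that literally start with the script prefix. Hypothetical payloads like the following would pass through unchanged and make useful test cases (this list is illustrative, not exhaustive):
        <!-- event handlers not in the XPath list survive -->
        <img src="x" onerror="alert(1)">
        <form onsubmit="alert(1)"><input onkeydown="alert(1)"></form>

        <!-- leading whitespace defeats starts-with(); browsers still strip it and execute the URL -->
        <a href=" javascript:alert(1)">click</a>

        <!-- data: URLs and meta refresh are not filtered at all -->
        <a href="data:text/html,<script>alert(1)</script>">click</a>
        <meta http-equiv="refresh" content="0;url=javascript:alert(1)">
    A safer pattern is usually to whitelist the tags and attributes you allow rather than blacklist the ones you know are dangerous.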

    Read the article

  • MySQL for Excel 1.1.0 GA has been released

    - by Javier Treviño
    The MySQL Windows Experience Team is proud to announce the release of MySQL for Excel version 1.1.0 GA, one of our newest products contained in the MySQL Installer suite. You can download it from our official Downloads page at http://dev.mysql.com/downloads/installer/. The 1.1.0 release of MySQL for Excel introduces the following feature: Edit MySQL Data. This may be the coolest feature so far; users will be able to edit the data in a MySQL table using MS Excel in a very friendly and intuitive way. Edit Data supports inserting new rows, deleting existing rows and updating existing data as easily as playing with data in an Excel spreadsheet and pushing the changes back to the server. This version also contains the following bug fixes:
    - Enabled the following checkboxes in the Append Data's Advanced Options dialog and added code in the Append Data dialog to use them as follows: "Automatically store the column mapping for the given table" - if checked, the current mapping will be stored automatically after clicking the Append button if the append operation is successful and there is no mapping for the current connection.schema.table already; the new mapping is stored with a proposed name of Mapping. "Reload stored column mapping for the selected table automatically" - if checked, the first Stored Mapping found where all column names in the source grid match all column names in the target grid is automatically selected and applied when the Append Data dialog is loaded.
    - Fixed code in Append Data that applies a stored column mapping to skip target columns where the associated mapping is empty (saved as a -1).
    - Enclosed the Add-In's startup code in a try-catch block in order to log any possible error thrown during startup, and added information messages to the log at the beginning of the Add-In's startup code and at the end of the shutdown code. Also changed the wrapper method that calls the MySQLUtility to write messages to the log to make logging easier, thus changing the log call throughout all the code that contains a try-catch block.
    - Added code to the main wix configuration file to check if a newer version is already installed and, if so, abort the installation.
    - Fixed code to refresh the Import Procedure Form's preview grid's data source to repaint its contents every time the Call button is pressed.
    - Added code to re-pull connections after connections are migrated from Excel to Workbench.
    - Fixed code so when the Append Data's Automatic Mapping is performed any subsequent change on a mapping resets the mapping to a Manual Mapping.
    - Added code to the InfoDialog class to set the button text to "Show Details" or "Hide Details" depending on the status of the Details text container.
    - Fixed a GUID in the main wix configuration file so now previous versions are uninstalled during a new installation.
    - Added an option to the Export Data's Advanced Options dialog to remove columns with no data; by default the Export Dialog will only flag those columns as Excluded.
    - Added code to display a warning and paint a column red if the column name in the Export Data dialog is not set, display a warning if the table name is not set, and stack warnings but not display them if a column is Excluded; warnings are displayed normally for columns once they are no longer Excluded.
    - Added code to prevent the Append and Export of Data if more than 1 selection is made (selecting more than 1 area holding the Ctrl key while selecting Excel cells).
    - Fixed a problem that prevented MySQL for Excel from loading when Display settings in Windows 7 are set to Adjust to Best Performance (Oracle bug 14521405 - UNHANDLED EXCEPTION IS THROWN WHEN LOADING MYSQL FOR EXCEL).
    - Fixed code that renames the auto-generated Primary Key column when the Table name changes, since it was not detecting if a column with the same name already existed in the table. The column duplication was not actually happening; it only looked that way because the automatically generated PK column was not detecting that a column already had that name.
    - Fixed code in the Export Data dialog to always set an empty string instead of null on the MySQLDataColumn properties that store MySQL data types (MySQLDataType, RowsFrom1stDataType and RowsFrom2ndDataType).
    - Added code to display a warning and color red a column whose Data Type has not been set by the user or has been manually cleared.
    - Added code to output exception messages to the application log consistently in all places where exceptions are caught.
    A series of blog posts explaining the new Edit MySQL Data feature and the other existing features are coming in this blog. You can access the MySQL for Excel documentation at http://dev.mysql.com/doc/refman/5.5/en/mysql-for-excel.html. You can also post questions on our MySQL for Excel forum found at http://forums.mysql.com/. Enjoy and thanks for the support!

    Read the article

  • NoSQL Java API for MySQL Cluster: Questions & Answers

    - by Mat Keep
    The MySQL Cluster engineering team recently ran a live webinar, available now on-demand, demonstrating the ClusterJ and ClusterJPA NoSQL APIs for MySQL Cluster, and how these can be used in building real-time, high-scale Java-based services that require continuous availability. Attendees asked a number of great questions during the webinar, and I thought it would be useful to share those here, so others are also able to learn more about the Java NoSQL APIs. First, a little bit about why we developed these APIs and why they are interesting to Java developers. ClusterJ and ClusterJPA: ClusterJ is a Java interface to MySQL Cluster that provides either a static or dynamic domain object model, similar to the data model used by JDO, JPA, and Hibernate. A simple API gives users extremely high performance for common operations: insert, delete, update, and query. ClusterJPA works with ClusterJ to extend functionality, including:
    - Persistent classes
    - Relationships
    - Joins in queries
    - Lazy loading
    - Table and index creation from object model
    By eliminating data transformations via SQL, users get lower data access latency and higher throughput. In addition, Java developers have a more natural programming method to directly manage their data, with a complete, feature-rich solution for Object/Relational Mapping. As a result, the development of Java applications is simplified with faster development cycles, resulting in accelerated time to market for new services. MySQL Cluster offers multiple NoSQL APIs alongside Java:
    - Memcached for a persistent, high performance, write-scalable Key/Value store
    - HTTP/REST via an Apache module
    - C++ via the NDB API for the lowest absolute latency
    Developers can use SQL as well as NoSQL APIs for access to the same data set via multiple query patterns – from simple Primary Key lookups or inserts to complex cross-shard JOINs using Adaptive Query Localization. Marrying NoSQL and SQL access to an ACID-compliant database offers developers a number of benefits. MySQL Cluster’s distributed, shared-nothing architecture with auto-sharding and real-time performance makes it a great fit for workloads requiring high volume OLTP. Users also get the added flexibility of being able to run real-time analytics across the same OLTP data set for real-time business insight. OK – hopefully you now have a better idea of why ClusterJ and JPA are available. Now, for the Q&A. Q. Why would I use Connector/J vs. ClusterJ? A. Partly it's a question of whether you prefer to work with SQL (Connector/J) or objects (ClusterJ). Performance of ClusterJ will be better as there is no need to pass through the MySQL Server. A ClusterJ operation can only act on a single table (e.g. no joins) - ClusterJPA extends that capability. Q. Can I mix different APIs (i.e. ClusterJ, Connector/J) in our application for different query types? A. Yes. You can mix and match all of the API types, SQL, JDBC, ODBC, ClusterJ, Memcached, REST, C++. They all access the exact same data in the data nodes. Update through one API and new data is instantly visible to all of the others. Q. How many TCP connections would a SessionFactory instance create for a cluster of 8 data nodes? A. SessionFactory has a connection to the mgmd (management node) but otherwise is just a vehicle to create Sessions. Without using connection pooling, a SessionFactory will have one connection open with each data node. Using optional connection pooling allows multiple connections from the SessionFactory to increase throughput. Q.
Can you give details of how ClusterJ optimizes sharding to enhance performance of distributed query processing? A. Each data node in a cluster runs a Transaction Coordinator (TC), which begins and ends the transaction, but also serves as a resource to operate on the result rows. While an API node (such as a ClusterJ process) can send queries to any TC/data node, there are performance gains if the TC is where most of the result data is stored. ClusterJ computes the shard (partition) key to choose the data node where the row resides as the TC. Q. What happens if we perform two primary key lookups within the same transaction? Are they sent to the data node in one transaction? A. ClusterJ will send identical PK lookups to the same data node. Q. How is distributed query processing handled by MySQL Cluster? A. If the data is split between data nodes then all of the information will be transparently combined and passed back to the application. The session will connect to a data node - typically by hashing the primary key - which then interacts with its neighboring nodes to collect the data needed to fulfil the query. Q. Can I use Foreign Keys with MySQL Cluster? A. Support for Foreign Keys is included in the MySQL Cluster 7.3 Early Access release. Summary: The NoSQL Java APIs are packaged with MySQL Cluster, available for download here, so feel free to take them for a spin today! Key Resources:
- MySQL Cluster on-line demo
- MySQL ClusterJ and JPA On-demand webinar
- MySQL ClusterJ and JPA documentation
- MySQL ClusterJ and JPA whitepaper and tutorial
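    To make the API shape concrete, here is a minimal ClusterJ sketch. The table, column names, and connect string are assumptions for illustration; the properties, annotations, and Session calls are the standard ClusterJ API described above:
        import java.util.Properties;
        import com.mysql.clusterj.ClusterJHelper;
        import com.mysql.clusterj.Session;
        import com.mysql.clusterj.SessionFactory;
        import com.mysql.clusterj.annotation.PersistenceCapable;
        import com.mysql.clusterj.annotation.PrimaryKey;

        public class ClusterJExample {

            // Domain interface mapped to a hypothetical NDB table named t_basic
            @PersistenceCapable(table = "t_basic")
            public interface Subscriber {
                @PrimaryKey
                int getId();
                void setId(int id);

                String getName();
                void setName(String name);
            }

            public static void main(String[] args) {
                Properties props = new Properties();
                // connect string of the management node and target database (hypothetical values)
                props.setProperty("com.mysql.clusterj.connectstring", "localhost:1186");
                props.setProperty("com.mysql.clusterj.database", "test");

                SessionFactory factory = ClusterJHelper.getSessionFactory(props);
                Session session = factory.getSession();

                // insert: goes straight to the data nodes, no SQL involved
                Subscriber s = session.newInstance(Subscriber.class);
                s.setId(1);
                s.setName("alice");
                session.persist(s);

                // primary key lookup: ClusterJ hashes the PK to pick the transaction coordinator
                Subscriber found = session.find(Subscriber.class, 1);
                System.out.println(found.getName());

                session.close();
                factory.close();
            }
        }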

    Read the article

  • Bizarre and very specific Internet connection loss

    - by Synetech
    Yesterday (Friday, September 21, 2012), my Internet connection started acting up. After some testing, I confirmed a very specific and baffling set of symptoms:
    - Internet connection goes away every 25-35 minutes (I did not confirm the exact interval, but it seems to be about 30 mins.)
    - Only some protocols are affected; HTTP*, P2P, etc. stop working; FTP, etc. continue to work
    - When it’s stopped, cannot even ping router or cable-modem IPs or view their firmware pages
    - Domain-names and IPs are irrelevant (for protocols that stop working, neither work; for those that still work, both work)
    - Resetting router fixes it for another 30 minutes
    - Keeping the connection idle or active doesn’t seem to make a difference (nor does the bandwidth usage in that period)
    - Connecting directly to cable-modem allows it to work indefinitely
    - Disconnecting the router from the cable-modem works indefinitely (no Internet connection obviously, but can still access router IP and firmware page)
    - Connecting the router to the cable-modem, but putting the modem on standby, also works indefinitely
    - Same problem with both a wireless laptop and a wired (on any port) desktop (both Windows 7; will try to test Windows XP when possible)
    Nothing had changed in the days leading up to the issue. No modifications to the networking configuration or the router; there were not even any Windows updates except for an MSSE definition update. Waiting does not fix it, nor does any amount of fiddling with anything; only resetting the router fixes it for 30 minutes (resetting the cable-modem doesn't work either). I tried cleaning the pins in the router’s plugs, but that didn’t help, which was not really a surprise since I was not getting a lost connection error. Obviously my first thought was that the router was having a problem, and this is borne out by some tests. The problem is that when it drops, it is not a full drop, since I can still do things like ftp ftp.mcafee.com and such, which means that the connection and DNS are still working. Moreover, if it were the router, then why does it stay alive indefinitely when not connected to the cable-modem (i.e., no outside influence)? The problem doesn't seem to be either the cable-modem or the router, but rather an interaction between the two, like something from the outside (port scan? hacker? ISP?) that is triggering a problem in the router. I see that there have been a couple of vulnerabilities for the DI-524, but those were a while back and should be fixed since I have the latest firmware for it. I don’t think it’s my ISP (Rogers) since I have been using the router for several years without problem and can connect indefinitely when bypassing it. But I can’t rule them out since that is one of the only possible things that could have suddenly changed. Does anybody have any ideas of explanations, fixes, or tests? (I note that when I opened the router, I heard a very high-pitched noise from somewhere near the capacitors/ferrite ring which I don’t think I heard the last time I opened it a few years ago, but then, if it were that, why would it affect only a very small, specific set of functions?)
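    One low-effort test that would narrow this down is to log reachability of the router and modem on a fixed interval, so the exact drop time, duration, and whether ARP still resolves are all captured. A rough Windows sketch, assuming the DI-524 is at its default 192.168.0.1 and the cable modem answers on 192.168.100.1 (both assumptions; adjust to the actual addresses):
        @echo off
        rem log_connectivity.cmd - hypothetical helper: timestamps when the router/modem stop answering
        :loop
        echo ---- %date% %time% >> c:\netlog.txt
        ping -n 1 -w 1000 192.168.0.1 >nul && (echo router OK >> c:\netlog.txt) || (echo router DOWN >> c:\netlog.txt)
        ping -n 1 -w 1000 192.168.100.1 >nul && (echo modem OK >> c:\netlog.txt) || (echo modem DOWN >> c:\netlog.txt)
        arp -a 192.168.0.1 >> c:\netlog.txt
        timeout /t 30 >nul
        goto loop
    Comparing the log against the router's own log (if it keeps one) should show whether the drops line up with a DHCP/WAN lease renewal or some other periodic event.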

    Read the article

  • fail2ban custom action to permanently ban IPs from China

    - by John Magnolia
    When a IP address gets banned how can I check if the banned IP address is from China. If yes, then add it to the permanent ban list. I have found this nice guide which write the banned IP to file. Reason: I am getting a lot of brute force attacks from China daily, thankfully fail2ban is helping restrict this although they appear to be getting worse and they are just changing their IP Address. Or even better would be if there was a maintained database of known hacker IP addresses. Example 1 Hi, The IP 60.169.78.77 has just been banned by Fail2Ban after 4 attempts against vsftpd. Here are more information about 60.169.78.77: % [whois.apnic.net node-7] % Whois data copyright terms http://www.apnic.net/db/dbcopyright.html inetnum: 60.166.0.0 - 60.175.255.255 netname: CHINANET-AH descr: CHINANET anhui province network descr: China Telecom descr: A12,Xin-Jie-Kou-Wai Street descr: Beijing 100088 country: CN admin-c: CH93-AP tech-c: JW89-AP mnt-by: APNIC-HM mnt-routes: MAINT-CHINANET-AH mnt-lower: MAINT-CHINANET-AH status: ALLOCATED PORTABLE changed: [email protected] 20040721 source: APNIC person: Chinanet Hostmaster nic-hdl: CH93-AP e-mail: [email protected] address: No.31 ,jingrong street,beijing address: 100032 phone: +86-10-58501724 fax-no: +86-10-58501724 country: CN changed: [email protected] 20070416 mnt-by: MAINT-CHINANET source: APNIC person: Jinneng Wang address: 17/F, Postal Building No.120 Changjiang address: Middle Road, Hefei, Anhui, China country: CN phone: +86-551-2659073 fax-no: +86-551-2659287 e-mail: [email protected] nic-hdl: JW89-AP mnt-by: MAINT-NEW changed: [email protected] 19990818 source: APNIC Regards, Fail2Ban Example 2 Hi, The IP 60.169.78.81 has just been banned by Fail2Ban after 4 attempts against vsftpd. Here are more information about 60.169.78.81: % [whois.apnic.net node-6] % Whois data copyright terms http://www.apnic.net/db/dbcopyright.html inetnum: 60.166.0.0 - 60.175.255.255 netname: CHINANET-AH descr: CHINANET anhui province network descr: China Telecom descr: A12,Xin-Jie-Kou-Wai Street descr: Beijing 100088 country: CN admin-c: CH93-AP tech-c: JW89-AP mnt-by: APNIC-HM mnt-routes: MAINT-CHINANET-AH mnt-lower: MAINT-CHINANET-AH status: ALLOCATED PORTABLE changed: [email protected] 20040721 source: APNIC person: Chinanet Hostmaster nic-hdl: CH93-AP e-mail: [email protected] address: No.31 ,jingrong street,beijing address: 100032 phone: +86-10-58501724 fax-no: +86-10-58501724 country: CN changed: [email protected] 20070416 mnt-by: MAINT-CHINANET source: APNIC person: Jinneng Wang address: 17/F, Postal Building No.120 Changjiang address: Middle Road, Hefei, Anhui, China country: CN phone: +86-551-2659073 fax-no: +86-551-2659287 e-mail: [email protected] nic-hdl: JW89-AP mnt-by: MAINT-NEW changed: [email protected] 19990818 source: APNIC Regards, Fail2Ban Example 3 Hi, The IP 222.133.244.99 has just been banned by Fail2Ban after 4 attempts against vsftpd. 
Here are more information about 222.133.244.99: % [whois.apnic.net node-6] % Whois data copyright terms http://www.apnic.net/db/dbcopyright.html inetnum: 222.133.244.96 - 222.133.244.127 netname: LCZFFHQ country: CN descr: liaochenggovermentfanghuoqiang admin-c: DS95-AP tech-c: DS95-AP status: ASSIGNED NON-PORTABLE changed: [email protected] 20060122 mnt-by: MAINT-CNCGROUP-SD source: APNIC route: 222.132.0.0/14 descr: CNC Group CHINA169 Shandong Province Network country: CN origin: AS4837 mnt-by: MAINT-CNCGROUP-RR changed: [email protected] 20060118 source: APNIC person: Data Communication Bureau Shandong nic-hdl: DS95-AP e-mail: [email protected] address: No.77 Jingsan Road,Jinan,Shandong,P.R.China phone: +86-531-6052611 fax-no: +86-531-6052414 country: CN changed: [email protected] 20050330 mnt-by: MAINT-CNCGROUP-SD source: APNIC Regards, Fail2Ban
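    One possible shape for this, sketched rather than tested: point the jail at an additional custom action whose actionban shells out to geoiplookup (from the geoip-bin package) and only escalates Chinese IPs to a permanent list. The file names and paths below are hypothetical.
        # /etc/fail2ban/action.d/ban-cn.conf (hypothetical name)
        [Definition]
        actionstart =
        actionstop =
        actioncheck =
        actionunban =
        # <ip> is substituted by fail2ban with the banned address
        actionban = /usr/local/bin/ban-if-china.sh <ip>

        # /usr/local/bin/ban-if-china.sh (hypothetical helper script)
        #!/bin/sh
        IP="$1"
        # geoiplookup prints e.g. "GeoIP Country Edition: CN, China"
        if geoiplookup "$IP" | grep -q ": CN,"; then
            echo "$IP" >> /etc/fail2ban/ip.blacklist.cn
            iptables -I INPUT -s "$IP" -j DROP
        fi
    The permanent part still needs the blacklist re-applied after a reboot (for example from the action's actionstart), and this action has to be listed alongside the normal ban action in the relevant jail; treat it as a starting point, not a drop-in configuration.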

    Read the article
