Search Results

Search found 8605 results on 345 pages for 'general dynamics'.


  • C# need help debugging socks5-connection attempt

    - by Chuck
    Hi, I've written the following code to (successfully) connect to a SOCKS5 proxy. I send a user/pw auth and get an OK reply (0x00), but as soon as I tell the proxy to connect to whichever ip:port, it gives me 0x01 (general error).

    ```csharp
    Socket socket5_proxy = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    IPEndPoint proxyEndPoint = new IPEndPoint(IPAddress.Parse("111.111.111.111"), 1080); // proxy ip, port. fake for posting purposes.
    socket5_proxy.Connect(proxyEndPoint);

    byte[] init_socks_command = new byte[4];
    init_socks_command[0] = 0x05;
    init_socks_command[1] = 0x02;
    init_socks_command[2] = 0x00;
    init_socks_command[3] = 0x02;
    socket5_proxy.Send(init_socks_command);

    byte[] socket_response = new byte[2];
    int bytes_recieved = socket5_proxy.Receive(socket_response, 2, SocketFlags.None);
    if (socket_response[1] == 0x02)
    {
        byte[] temp_bytes;
        string socks5_user = "foo";
        string socks5_pass = "bar";
        byte[] auth_socks_command = new byte[3 + socks5_user.Length + socks5_pass.Length];
        auth_socks_command[0] = 0x05;
        auth_socks_command[1] = Convert.ToByte(socks5_user.Length);
        temp_bytes = Encoding.Default.GetBytes(socks5_user);
        temp_bytes.CopyTo(auth_socks_command, 2);
        auth_socks_command[2 + socks5_user.Length] = Convert.ToByte(socks5_pass.Length);
        temp_bytes = Encoding.Default.GetBytes(socks5_pass);
        temp_bytes.CopyTo(auth_socks_command, 3 + socks5_user.Length);
        socket5_proxy.Send(auth_socks_command);
        socket5_proxy.Receive(socket_response, 2, SocketFlags.None);
        if (socket_response[1] != 0x00)
            return;

        byte[] connect_socks_command = new byte[10];
        connect_socks_command[0] = 0x05;
        connect_socks_command[1] = 0x02; // streaming
        connect_socks_command[2] = 0x00;
        connect_socks_command[3] = 0x01; // ipv4
        temp_bytes = IPAddress.Parse("222.222.222.222").GetAddressBytes(); // target connection. fake ip, obviously
        temp_bytes.CopyTo(connect_socks_command, 4);
        byte[] portBytes = BitConverter.GetBytes(8888);
        connect_socks_command[8] = portBytes[0];
        connect_socks_command[9] = portBytes[1];
        socket5_proxy.Send(connect_socks_command);
        socket5_proxy.Receive(socket_response);
        if (socket_response[1] != 0x00)
            MessageBox.Show("Damn it"); // I always end here, 0x01
    }
    ```

    I've used this as a reference: http://en.wikipedia.org/wiki/SOCKS#SOCKS_5

    Have I completely misunderstood something here? The way I see it, I can connect to the SOCKS5 proxy fine. I can authenticate fine. But I/the proxy can't "do" anything? Yes, I know the proxy works. Yes, the target ip is available, and yes, the target port is open/responsive. I get 0x01 no matter what I try to connect to. Any help is VERY MUCH appreciated!

    Thanks, Chuck
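    For reference, RFC 1928 defines CMD 0x01 as CONNECT and 0x02 as BIND, so the `0x02 // streaming` byte above actually requests a BIND, which a proxy may well answer with a general failure. The two port bytes must also go out in network byte order, which BitConverter.GetBytes does not produce on a little-endian machine. A minimal Python sketch of a compliant CONNECT request, with placeholder addresses and the RFC 1929 auth exchange elided:

    ```python
    import socket
    import struct

    PROXY = ("111.111.111.111", 1080)    # placeholders, as in the question
    TARGET = ("222.222.222.222", 8888)

    s = socket.create_connection(PROXY)

    # Greeting: version 5, one auth method offered, username/password (0x02).
    s.sendall(bytes([0x05, 0x01, 0x02]))
    version, method = s.recv(2)          # a real client would loop on recv

    # ... RFC 1929 username/password sub-negotiation would go here ...

    # CONNECT request: VER=0x05, CMD=0x01 (CONNECT; 0x02 is BIND),
    # RSV=0x00, ATYP=0x01 (IPv4), 4 address bytes, 2 port bytes big-endian.
    request = bytes([0x05, 0x01, 0x00, 0x01])
    request += socket.inet_aton(TARGET[0])
    request += struct.pack(">H", TARGET[1])  # network byte order
    s.sendall(request)

    reply = s.recv(10)
    print("REP:", reply[1])                  # 0x00 means succeeded
    ```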


  • Should we develop a custom membership provider in this case?

    - by Allen
    I'll be adding a bounty to this, probably 200, more if you guys think it's appropriate. I won't accept an answer until I can add a bounty, so feel free to go ahead and answer now.

    Summary

    Long story short, we've been tasked with gutting the authentication and authorization parts of a fairly old and bloated ASP.NET application that previously had all of these components written from scratch. Since our application isn't a typical one, and none of us have experience with ASP.NET's built-in membership provider, we're not sure if we should roll our own authentication and authorization again or if we should try to work within the ASP.NET membership provider mindset and develop our own membership provider.

    Our Application

    We have a fairly old ASP.NET application that gets installed at customer locations to service clients on a LAN. Admins create users (users do not sign up) and, depending on the install, we may have the software integrated with LDAP. Currently, the LDAP integration bulk-imports the users to our database and when they log in, it authenticates against LDAP so we don't have to manage their passwords. Nothing amazing there. Admins can assign users to one group, and they can change the authorization of that group to manage access to various parts of the software. Groups are maintained by admins (web-based UI) and, as said earlier, granted / denied permissions to certain functionality within the application. All this was completely written from the ground up without using any of the built-in .NET authorization or authentication. We literally have IsLoggedIn() methods that check for login and redirect to our login page if the user isn't.

    Our Rewrite

    We've been tasked to integrate more tightly with LDAP. They want us to tie groups in our application to groups (or whatever type of container LDAP uses) in LDAP, so that when a customer opts to use our LDAP integration, they don't have to manage their users in LDAP AND in our application. The new way, they will simply create users in LDAP, add them to groups in LDAP, and our application will see that they belong to the appropriate LDAP group and authenticate and authorize them. In addition, we've been given the go-ahead to completely rip out the user authentication and authorization code and re-do it.

    Our Problem

    The problem is that none of us have any experience with ASP.NET membership provider functionality. The little bit of exposure I have to it makes me worry that it was not intended to be used for an application such as ours. Though developing our own ASP.NET membership provider and role manager sounds like it would be a great experience, and most likely the appropriate thing to do. Basically, I'm looking for advice: should we be using the ASP.NET membership provider & role management API, or should we continue to roll our own? I know this decision will be influenced by our requirements, so I'm going over them below.

    Our Requirements

    Just a quick n' dirty list:

    - Maintain the ability to have a db of users, authenticate them, and give admins (only, not users) the ability to CRUD users
    - Allow the site to integrate with LDAP; when this is chosen, they don't want any users stored in the DB, only the relationship between groups as they exist in our app / db and the groups/containers as they exist in LDAP
    - .NET 3.5 is being used (mix of ASP.NET WebForms and ASP.NET MVC)
    - Has to work in ASP.NET and ASP.NET MVC (shouldn't be a problem, I'm guessing)
    - This can't be user-centric; administrators need to be the only ones that CRUD (or import via LDAP) users and groups
    - We have to be able to auth via LDAP when it's configured to do so

    I always try to monitor my questions closely, so feel free to ask for more info. Also, as a general summary of what I'm looking for in an answer: "You should/shouldn't use xyz, here's why." Links regarding ASP.NET membership provider and role management are very welcome; most of the stuff I'm finding is 5+ years old.

    Edit: Added some stuff to "Our Rewrite".


  • How do I solve an unresolved external when using C++ Builder packages?

    - by David M
    I'm experimenting with reconfiguring my application to make heavy use of packages. Both I and another developer running a similar experiment are running into a bit of trouble when linking using several different packages. We're probably both doing something wrong, but goodness knows what :)

    The situation is this: The first package, PackageA.bpl, contains the C++ class FooA. The class is declared with the PACKAGE directive. The second package, PackageB.bpl, contains a class inheriting from FooA, called FooB. It includes FooB.h, and the package is built using runtime packages and links to PackageA by adding a reference to PackageA.bpi.

    When building PackageB, it compiles fine but linking fails with a number of unresolved externals, the first few of which are:

    ```
    [ILINK32 Error] Error: Unresolved external '__tpdsc__ FooA' referenced from C:\blah\FooB.OBJ
    [ILINK32 Error] Error: Unresolved external 'FooA::' referenced from C:\blah\FooB.OBJ
    [ILINK32 Error] Error: Unresolved external '__fastcall FooA::~FooA()' referenced from blah\FooB.OBJ
    ```

    etc. Running TDump on PackageA.bpl shows:

    ```
    Exports from PackageA.bpl
      14 exported name(s), 14 export addresse(s). Ordinal base is 1.
      Sorted by Name:
        RVA      Ord. Hint Name
        -------- ---- ---- ----
        00002A0C    8 0000 __tpdsc__ FooA
        00002AD8   10 0001 __linkproc__ FooA::Finalize
        00002AC8    9 0002 __linkproc__ FooA::Initialize
        00002E4C   12 0003 __linkproc__ PackageA::Finalize
        00002E3C   11 0004 __linkproc__ PackageA::Initialize
        00006510   14 0007 FooA::
        00002860    5 0008 FooA::FooA(FooA&)
        000027E4    4 0009 FooA::FooA()
        00002770    3 000A __fastcall FooA::~FooA()
        000028DC    6 000B __fastcall FooA::Method1() const
        000028F4    7 000C __fastcall FooA::Method2() const
        00001375    2 000D Finalize
        00001368    1 000E Initialize
        0000610C   13 000F ___CPPdebugHook
    ```

    So the class definitely seems to be exported and available to link. I can see entries for the specific things ILink32 says it's looking for and not finding. Running TDump on the BPI file shows similar entries.

    Other info: The class does descend from TObject, though originally, before refactoring into packages, it was a normal C++ class. (More detail below. It seems "safer" using VCL-style classes when trying to solve problems with a very Delphi-ish thing like this anyway. Changing this only changes the order of unresolved externals: first Method1 and Method2 are not found, then the others.)

    Declaration for FooA:

    ```cpp
    class PACKAGE FooA : public TObject
    {
    public:
        FooA();
        virtual __fastcall ~FooA();
        FooA(const FooA&);
        virtual __fastcall long Method1() const;
        virtual __fastcall long Method2() const;
    };
    ```

    and FooB:

    ```cpp
    class FooB : public FooA
    {
    public:
        FooB();
        virtual __fastcall ~FooB();
        // ... other methods ...
    };
    ```

    All methods definitely are implemented in the .cpp files, so it's not failing to find them because they don't exist! The .cpp files also contain #pragma package(smart_init) near the top, under the includes.

    Questions that might help...

    - Are packages reliable using C++, or are they only usable with Delphi code?
    - Is linking to the first package by adding a reference to its BPI correct? Is that how you're supposed to do it? I could use a LIB, but it seems to make the second package much larger, and I suspect it's statically linking in the contents of the first.
    - Can we use the PACKAGE directive only on TObject-derived classes? There is no compiler warning using it on standard C++ classes.
    - Is splitting code into packages the best way to achieve the goal of isolating code and communicating through defined layers / interfaces? I've been investigating this path because it seems to be the C++Builder / Delphi way, and if it worked it looks attractive. But are there better alternatives?

    I'm very new to using packages and have only known about them through using components before. Any general words of advice would be great! We're using C++Builder 2010. I've fabricated the class and method names in the above code examples, but other than that the details are exactly what we're seeing.

    Cheers, David


  • python: what are efficient techniques to deal with deeply nested data in a flexible manner?

    - by AlexandreS
    My question is not about a specific code snippet but more general, so please bear with me: how should I organize the data I'm analyzing, and which tools should I use to manage it?

    I'm using python and numpy to analyse data. Because the python documentation indicates that dictionaries are very optimized in python, and also due to the fact that the data itself is very structured, I stored it in a deeply nested dictionary. Here is a skeleton of the dictionary: the position in the hierarchy defines the nature of the element, and each new line defines the contents of a key in the preceding level:

    ```
    [AS091209M02] [AS091209M01] [AS090901M06] ...
        [100113] [100211] [100128] [100121]
            [R16] [R17] [R03] [R15] [R05] [R04] [R07] ...
                [1263399103] ...
                    [ImageSize] [FilePath] [Trials] [Depth] [Frames] [Responses] ...
                        [N01] [N04] ...
                            [Sequential] [Randomized]
                                [Ch1] [Ch2]
    ```

    Edit: To explain my data set a bit better:

    ```
    [individual]                                       ex: [AS091209M02]
        [imaging session (date string)]                ex: [100113]
            [region imaged]                            ex: [R16]
                [timestamp of file]                    ex: [1263399103]
                    [properties of file]               ex: [Responses]
                        [regions of interest in image] ex: [N01]
                            [format of data]           ex: [Sequential]
                                [channel of acquisition: this key indexes an array of values] ex: [Ch1]
    ```

    The type of operations I perform is, for instance, to compute properties of the arrays (listed under Ch1, Ch2), pick up arrays to make a new collection, for instance analyze responses of N01 from region 16 (R16) of a given individual at different time points, etc. This structure works well for me and is very fast, as promised. I can analyze the full data set pretty quickly (and the dictionary is far too small to fill up my computer's RAM: half a gig).

    My problem comes from the cumbersome manner in which I need to program the operations on the dictionary. I often have stretches of code that go like this:

    ```python
    for mk in dic.keys():
        for rgk in dic[mk].keys():
            for nk in dic[mk][rgk].keys():
                for ik in dic[mk][rgk][nk].keys():
                    for ek in dic[mk][rgk][nk][ik].keys():
                        # do something
    ```

    which is ugly, cumbersome, non-reusable, and brittle (it needs to be recoded for any variant of the dictionary).

    I tried using recursive functions, but apart from the simplest applications, I ran into some very nasty bugs and bizarre behaviors that caused a big waste of time (it does not help that I don't manage to debug with pdb in ipython when I'm dealing with deeply nested recursive functions). In the end, the only recursive function I use regularly is the following:

    ```python
    def dicExplorer(dic, depth=-1, stp=0):
        '''prints the hierarchy of a dictionary.
        if depth not specified, will explore all the dictionary
        '''
        if depth - stp == 0:
            return
        try:
            list_keys = dic.keys()
        except AttributeError:
            return
        stp += 1
        for key in list_keys:
            print '+%s> [\'%s\']' % (stp * '---', key)
            dicExplorer(dic[key], depth, stp)
    ```

    I know I'm doing this wrong, because my code is long, noodly and non-reusable. I need to either use better techniques to flexibly manipulate the dictionaries, or to put the data in some database format (sqlite?). My problem is that since I'm (badly) self-taught in regards to programming, I lack the practical experience and background knowledge to appreciate the options available. I'm ready to learn new tools (SQL, object-oriented programming), whatever it takes to get the job done, but I am reluctant to invest my time and effort into something that will be a dead end for my needs. So what are your suggestions to tackle this issue, and be able to code my tools in a more brief, flexible and re-usable manner?
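    One technique that removes the hand-written nesting entirely is a single recursive generator that walks the tree once and yields (key-path, value) pairs; a minimal sketch, assuming a leaf is simply anything that is not a dict:

    ```python
    def iter_leaves(dic, path=()):
        """Recursively yield (key_path, value) pairs for every leaf."""
        for key, value in dic.items():
            if isinstance(value, dict):
                for pair in iter_leaves(value, path + (key,)):
                    yield pair
            else:
                yield path + (key,), value

    # Example: collect all 'Ch1' arrays recorded in region 'R16',
    # regardless of how deep they sit in the hierarchy.
    # selected = [v for path, v in iter_leaves(data)
    #             if 'R16' in path and path[-1] == 'Ch1']
    ```

    Because the key path comes back as a flat tuple, filtering becomes ordinary list-comprehension logic instead of a fresh stack of for-loops per dictionary variant.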


  • using return values from a C# .NET component built as COM+

    - by YvesR
    Hello, so far I have made a component in C# .NET 4 and use System.EnterpriseServices to make it COM-visible. I want to develop business methods in C#, but I still need to access them from classic ASP (VBScript). So far so good, everything works fine (except function overloading :)). Now I have made a test class to get more experience with return codes.

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.EnterpriseServices;
    using System.Management;

    namespace iController
    {
        /// <summary>
        /// The tools class provides additional functions for general use out of
        /// context to other classes of the iController.
        /// </summary>
        public class tools : ServicedComponent
        {
            #region public methods

            public bool TestBoolean()
            {
                return true;
            }

            public string TestString()
            {
                return "this is a string";
            }

            public int TestInteger()
            {
                return 32;
            }

            public double TestDouble()
            {
                return 32.32;
            }

            public float TestFloat()
            {
                float ret = 2 ^ 16;
                return ret;
            }

            public string[] TestArray()
            {
                string[] ret = { "0", "1" };
                return ret;
            }

            public int[][] TestJaggedArray()
            {
                int[][] jaggedArray = new int[3][];
                jaggedArray[0] = new int[] { 1, 3, 5, 7, 9 };
                jaggedArray[1] = new int[] { 0, 2, 4, 6 };
                jaggedArray[2] = new int[] { 11, 22 };
                return jaggedArray;
            }

            public Dictionary<string, string> TestDictionary()
            {
                Dictionary<string, string> ret = new Dictionary<string, string>();
                ret.Add("test1", "val1");
                ret.Add("test2", "val2");
                return ret;
            }

            #endregion
        }
    }
    ```

    Then I just made a simple VBScript file to run it with cscript.exe for testing purposes.

    ```vbscript
    Dim oTools : Set oTools = CreateObject("iController.tools")

    WScript.StdOut.WriteLine TypeName(oTools.TestBoolean()) & " - " & oTools.TestBoolean()
    WScript.StdOut.WriteLine TypeName(oTools.TestString()) & " - " & oTools.TestString()
    WScript.StdOut.WriteLine TypeName(oTools.TestInteger()) & " - " & oTools.TestInteger()
    WScript.StdOut.WriteLine TypeName(oTools.TestDouble()) & " - " & oTools.TestDouble()
    WScript.StdOut.WriteLine TypeName(oTools.TestFloat()) & " - " & oTools.TestFloat()

    test = oTools.TestArray()
    WScript.StdOut.WriteLine TypeName(test)
    WScript.StdOut.WriteLine UBound(test)
    For i = 0 To UBound(test)
        WScript.StdOut.WriteLine test(i)
    Next
    For Each item In test
        WScript.StdOut.WriteLine item
    Next

    test = oTools.TestJaggedArray()
    WScript.StdOut.WriteLine TypeName(test)
    For Each item In test
        WScript.StdOut.WriteLine test & " - " & test.Item(item)
    Next

    test = oTools.TestDictionary()
    WScript.StdOut.WriteLine TypeName(test)
    For Each item In test
        WScript.StdOut.WriteLine test & " - " & test.Item(item)
    Next
    ```

    What works fine: string, int, float, double. When it comes to arrays, jagged arrays or dictionaries, I get a type mismatch. VarType is 13 (object) for the dictionary, for example, but this dict seems to be different than Scripting.Dictionary. I checked codeproject.com and stackoverflow all day and didn't find any hints, except some thread on stackoverflow where someone mentioned there is a need to create an IDispatch interface. So, has anyone ever had the same issue and can help me or give me some hints I can go on with?


  • Wordpress > Custom wp-archives list by year and category with post count in sidebar

    - by David
    So I have been trying to add a custom sidebar archives section to this Wordpress theme. I am trying to get two separate yearly archive sections, one for category A and one for category B. I need to get a post count and display it as well, to the left of "articles". This is the format I have been trying to get in the sidebar:

    ```
    Category 1
    2010 - 5 articles
    2009 - 4 articles
    2008 - 6 articles
    2007 - 5 articles

    Category 2
    2010 - 8 articles
    2009 - 3 articles
    2008 - 7 articles
    2007 - 5 articles
    ```

    This code is pulling the yearly archive, but the links to the yearly archives do not filter by category. However, if Category 1 does not have posts in 2008, 2008 does not show. So I feel like I am really close, but there is something wrong here in the code. I have also been unsuccessful in pulling the post count for the year/category. Here is what I get:

    ```
    Category 1
    2010 - articles (these links all take you to the general yearly archive page, not category specific)
    2009 - articles
    2007 - articles

    Category 2
    2010 - articles
    2009 - articles
    2008 - articles
    2007 - articles
    ```

    Here are the functions I am using in my functions.php file. Any idea what I am doing wrong?

    ```php
    <?php
    function company_below_sidebar() {
        global $cat;
        ?>
        <div id="press">
            <h2>Press</h2>
            <ul><?php $cat = '1'; wp_get_archives('type=yearly') ?></ul>
        </div>
        <div id="news">
            <h2>News</h2>
            <ul><?php $cat = '4'; wp_get_archives('type=yearly') ?></ul>
        </div>
        <?php
    }
    add_action('thematic_betweenmainasides', 'company_below_sidebar');

    function customarchives_join( $x ) {
        global $wpdb;
        return $x . " INNER JOIN $wpdb->term_relationships ON ($wpdb->posts.ID = $wpdb->term_relationships.object_id) INNER JOIN $wpdb->term_taxonomy ON ($wpdb->term_relationships.term_taxonomy_id = $wpdb->term_taxonomy.term_taxonomy_id)";
    }

    function customarchives_where( $x ) {
        global $wpdb;
        global $cat;
        return $x . " AND $wpdb->term_taxonomy.taxonomy = 'category' AND $wpdb->term_taxonomy.term_id IN ($cat)";
    }

    add_filter( 'getarchives_where', 'customarchives_where' );
    add_filter( 'getarchives_join', 'customarchives_join' );

    function company_monthly_archive_flexible($input) {
        // Get URL from $input
        preg_match('(((f|ht){1}tp://)[-a-zA-Z0-9@:%_\+.~#?&;//=]+)', $input, $matches);
        $url = $matches[0];

        // Get content from $input (without the tags)
        $content = trim(strip_tags($input));

        // Separate each date element and put it in an array
        $dates = explode(" ", $content);

        // Get year
        $year = $dates[0];

        // Customize output format
        $format = "<li><a href='$url'><b>$year -</b> articles</a></li>\n";

        echo $format;
    }
    add_filter('get_archives_link', 'company_monthly_archive_flexible');
    ?>
    ```


  • Why do we need different CPU architectures for server & mini/mainframe & mixed-core?

    - by claws
    Hello, I was just wondering what other CPU architectures are available besides Intel & AMD. So I found the List of CPU architectures on Wikipedia. It categorizes notable CPU architectures into the following categories:

    - Embedded CPU architectures
    - Microcomputer CPU architectures
    - Workstation/Server CPU architectures
    - Mini/Mainframe CPU architectures
    - Mixed core CPU architectures

    I was analyzing their purposes and have a few doubts. I am taking the microcomputer CPU (PC) architecture as the reference and comparing the others to it.

    Embedded CPU architectures: They are a completely new world. Embedded systems are small and do very specific tasks, mostly real-time and low power consuming, so we do not need as many or as wide registers as are available in a microcomputer CPU (a typical PC). In other words, we do need a new, small and tiny architecture. Hence a new architecture and new instructions: RISC. The above point also clarifies why we need a separate operating system (RTOS).

    Workstation/Server CPU architectures: I don't know what a workstation is. Could someone clarify what a workstation is? As for the server: it is dedicated to running specific software (server software like httpd, mysql, etc.). Even if other processes run, we need to give the server process priority, therefore there is a need for a new scheduling scheme, and thus we need an operating system different from a general-purpose one. If you have any more points on the need for a server OS, please mention them. But I don't get why we need a new CPU architecture. Why can't a microcomputer CPU architecture do the job? Can someone please clarify?

    Mini/Mainframe CPU architectures: Again, I don't know what these are, or what minis or mainframes are used for. I just know they are very big and occupy a complete floor. But I never read about any real-world problems they are trying to solve. If anyone is working on one of these, please share your knowledge. Can someone clarify its purpose, and why a microcomputer CPU architecture is not suitable for it? Is there a new kind of operating system for this too? Why?

    Mixed core CPU architectures: Never heard of these.

    If possible, please keep your answer in this format:

    ```
    XYZ CPU architectures
    - Purpose of XYZ
    - Need for a new architecture. Why can't current microcomputer CPU
      architectures work? They go up to 3 GHz and have up to 8 cores.
    - Need for a new operating system. Why do we need a new kind of
      operating system for this kind of architecture?
    ```


  • Common Mercator Projection formulas for Google Maps not working correctly

    - by Tom Halladay
    I am building a tile overlay server for Google Maps in C#, and have found a few different code examples for calculating Y from latitude. After getting them to work in general, I started to notice certain cases where the overlays were not lining up properly. To test this, I made a test harness to compare Google Maps' Mercator LatToY conversion against the formulas I found online. As you can see below, they do not match in certain cases.

    Case #1, zoomed out: The problem is most evident when zoomed out. Up close, the problem is barely visible.

    Case #2, point proximity to the top & bottom of the viewing bounds: The problem is worse in the middle of the viewing bounds, and gets better towards the edges. This behavior can negate the behavior of Case #1.

    The test: I created a Google Maps page to display red lines using the Google Maps API's built-in Mercator conversion, and overlaid this with an image using the reference code for the Mercator conversion. These conversions are represented as black lines. Compare the difference.

    The results: Check out the top-most and bottom-most lines. The problem gets visually larger but numerically smaller as you zoom in, and it all but disappears at closer zoom levels, regardless of screen orientation.

    The code:

    Google Maps client-side code:

    ```javascript
    var lat = 0;
    for (lat = -80; lat <= 80; lat += 5) {
        map.addOverlay(new GPolyline([new GLatLng(lat, -180), new GLatLng(lat, 0)], "#FF0033", 2));
        map.addOverlay(new GPolyline([new GLatLng(lat, 0), new GLatLng(lat, 180)], "#FF0033", 2));
    }
    ```

    Server-side code (references: Tile Cutter, http://mapki.com/wiki/Tile_Cutter; OpenStreetMap wiki, http://wiki.openstreetmap.org/wiki/Mercator):

    ```csharp
    protected override void ImageOverlay_ComposeImage(ref Bitmap ZipCodeBitMap)
    {
        Graphics LinesGraphic = Graphics.FromImage(ZipCodeBitMap);
        Int32 MapWidth = Convert.ToInt32(Math.Pow(2, zoom) * 255);
        Point Offset = Cartographer.Mercator2.toZoomedPixelCoords(North, West, zoom);
        TrimPoint(ref Offset, MapWidth);

        for (Double lat = -80; lat <= 80; lat += 5)
        {
            Point StartPoint = Cartographer.Mercator2.toZoomedPixelCoords(lat, -179, zoom);
            Point EndPoint = Cartographer.Mercator2.toZoomedPixelCoords(lat, -1, zoom);
            TrimPoint(ref StartPoint, MapWidth);
            TrimPoint(ref EndPoint, MapWidth);
            StartPoint.X = StartPoint.X - Offset.X;
            EndPoint.X = EndPoint.X - Offset.X;
            StartPoint.Y = StartPoint.Y - Offset.Y;
            EndPoint.Y = EndPoint.Y - Offset.Y;

            LinesGraphic.DrawLine(new Pen(Color.Black, 2),
                                  StartPoint.X, StartPoint.Y,
                                  EndPoint.X, EndPoint.Y);
            LinesGraphic.DrawString(lat.ToString(),
                                    new Font("Verdana", 10),
                                    new SolidBrush(Color.Black),
                                    new Point(Convert.ToInt32((width / 3.0) * 2.0), StartPoint.Y));
        }
    }

    protected void TrimPoint(ref Point point, Int32 MapWidth)
    {
        point.X = Math.Max(point.X, 0);
        point.X = Math.Min(point.X, MapWidth - 1);
        point.Y = Math.Max(point.Y, 0);
        point.Y = Math.Min(point.Y, MapWidth - 1);
    }
    ```

    So, has anyone ever experienced this? Dare I ask, resolved this? Or does anyone simply have a better C# implementation of Mercator projection coordinate conversion? Thanks!
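    One detail worth checking in the server-side code: `Math.Pow(2, zoom) * 255` sizes the world from 255 px tiles, but standard Google tiles are 256 px wide, and a 1/256 scale error is largest far from the map center, which would match the drift described above. For comparison, a minimal Python sketch of the usual web-Mercator latitude-to-pixel-Y formula:

    ```python
    import math

    TILE_SIZE = 256  # standard Google tile size; note the 255 in the C# above

    def lat_to_pixel_y(lat_deg, zoom):
        """Latitude -> absolute pixel Y (measured from the top) at a zoom level."""
        siny = math.sin(math.radians(lat_deg))
        # Standard web-Mercator formula; clamping near the poles omitted.
        y = 0.5 - math.log((1 + siny) / (1 - siny)) / (4 * math.pi)
        return y * TILE_SIZE * (1 << zoom)

    print(lat_to_pixel_y(0, 1))    # 256.0: the equator, mid-height of a 512 px world
    print(lat_to_pixel_y(80, 1))   # ~57.5: near the top, where the drift shows most
    ```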


  • Cross Browser Issue

    - by dazedandconfused
    My background is in WinForms programming and I'm trying to branch out a bit. I'm finding cross-browser issues a frustrating barrier in general, but have a specific one that I just can't seem to work through. I want to display an image and place a semi-transparent bar across the top and bottom. This isn't my ultimate goal, of course, but it demonstrates the problem I'm having in a relatively short code fragment, so let's go with it.

    The sample code below displays as intended in Chrome, Safari, and Firefox. In IE8, the bar at the bottom doesn't appear at all. I've researched it for hours but just can't seem to come up with the solution. I'm sure this is some dumb rookie mistake, but you gotta start somewhere. Code snippet:

    ```html
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head>
        <title></title>
        <script type="text/javascript" language="javascript">
        </script>
        <style type="text/css">
            .workarea
            {
                position: relative;
                border: 1px solid black;
                background-color: #ccc;
                overflow: hidden;
                cursor: move;
                -moz-user-focus: normal;
                -moz-user-select: none;
                unselectable: on;
            }
            .semitransparent
            {
                filter: alpha(opacity=70);
                -moz-opacity: 0.7;
                -khtml-opacity: 0.7;
                opacity: 0.7;
                background-color: Gray;
            }
        </style>
    </head>
    <body style="width: 800px; height: 600px;">
        <div id="workArea" class="workarea" style="width: 800px; height: 350px; left: 100px; top: 50px; background-color: White; border: 1px solid black;">
            <img alt="" src="images/TestImage.jpg" style="left: 0px; top: 0px; border: none; z-index: 1;" />
            <div id="topBar" class="semitransparent" style="position: absolute; width: 800px; height: 75px; left: 0px; top: 0px; min-height: 75px; border: none; z-index: 2;" />
            <div id="bottomBar" class="semitransparent" style="position: absolute; width: 800px; height: 75px; left: 0px; top: 275px; min-height: 75px; border: none; z-index: 2;" />
        </div>
    </body>
    </html>
    ```


  • Incremental PCA

    - by smichak
    Hi, lately I've been looking into an implementation of an incremental PCA algorithm in python. I couldn't find something that would meet my needs, so I did some reading and implemented an algorithm I found in some paper. Here is the module's code; the relevant paper on which it is based is mentioned in the module's documentation. I would appreciate any feedback from people who are interested in this.

    Micha

    ```python
    #!/usr/bin/env python
    """ Incremental PCA calculation module.

    Based on P. Hall, D. Marshall and R. Martin, "Incremental Eigenanalysis for
    Classification", which appeared in British Machine Vision Conference,
    volume 1, pages 286-295, September 1998.

    Principal components are updated sequentially as new observations are
    introduced. Each new observation (x) is projected on the eigenspace spanned
    by the current principal components (U) and the residual vector
    (r = x - U(U.T*x)) is used as a new principal component (U' = [U r]). The
    new principal components are then rotated by a rotation matrix (R) whose
    columns are the eigenvectors of the transformed covariance matrix
    (D=U'.T*C*U) to yield p + 1 principal components. From those, only the
    first p are selected.
    """

    __author__ = "Micha Kalfon"

    import numpy as np

    _ZERO_THRESHOLD = 1e-9      # Everything below this is zero

    class IPCA(object):
        """Incremental PCA calculation object.

        General Parameters:
            m - Number of variables per observation
            n - Number of observations
            p - Dimension to which the data should be reduced
        """

        def __init__(self, m, p):
            """Creates an incremental PCA object for m-dimensional observations
            in order to reduce them to a p-dimensional subspace.

            @param m: Number of variables per observation.
            @param p: Number of principle components.

            @return: An IPCA object.
            """
            self._m = float(m)
            self._n = 0.0
            self._p = float(p)
            self._mean = np.matrix(np.zeros((m, 1), dtype=np.float64))
            self._covariance = np.matrix(np.zeros((m, m), dtype=np.float64))
            self._eigenvectors = np.matrix(np.zeros((m, p), dtype=np.float64))
            self._eigenvalues = np.matrix(np.zeros((1, p), dtype=np.float64))

        def update(self, x):
            """Updates with a new observation vector x.

            @param x: Next observation as a column vector (m x 1).
            """
            m = self._m
            n = self._n
            p = self._p
            mean = self._mean
            C = self._covariance
            U = self._eigenvectors
            E = self._eigenvalues

            if type(x) is not np.matrix or x.shape != (m, 1):
                raise TypeError('Input is not a matrix (%d, 1)' % int(m))

            # Update covariance matrix and mean vector and centralize input
            # around new mean
            oldmean = mean
            mean = (n*mean + x) / (n + 1.0)
            C = (n*C + x*x.T + n*oldmean*oldmean.T - (n+1)*mean*mean.T) / (n + 1.0)
            x -= mean

            # Project new input on current p-dimensional subspace and calculate
            # the normalized residual vector
            g = U.T*x
            r = x - (U*g)
            r = (r / np.linalg.norm(r)) if not _is_zero(r) else np.zeros_like(r)

            # Extend the transformation matrix with the residual vector and find
            # the rotation matrix by solving the eigenproblem DR=RE
            U = np.concatenate((U, r), 1)
            D = U.T*C*U
            (E, R) = np.linalg.eigh(D)

            # Sort eigenvalues and eigenvectors from largest to smallest to get
            # the rotation matrix R
            sorter = list(reversed(E.argsort(0)))
            E = E[sorter]
            R = R[:,sorter]

            # Apply the rotation matrix
            U = U*R

            # Select only p largest eigenvectors and values and update state
            self._n += 1.0
            self._mean = mean
            self._covariance = C
            self._eigenvectors = U[:, 0:p]
            self._eigenvalues = E[0:p]

        @property
        def components(self):
            """Returns a matrix with the current principal components as columns.
            """
            return self._eigenvectors

        @property
        def variances(self):
            """Returns a list with the appropriate variance along each principal
            component.
            """
            return self._eigenvalues

    def _is_zero(x):
        """Return a boolean indicating whether the given vector is a zero vector
        up to a threshold.
        """
        return np.fabs(x).min() < _ZERO_THRESHOLD

    if __name__ == '__main__':
        import sys

        def pca_svd(X):
            X = X - X.mean(0).repeat(X.shape[0], 0)
            [_, _, V] = np.linalg.svd(X)
            return V

        N = 1000
        obs = np.matrix([np.random.normal(size=10) for _ in xrange(N)])

        V = pca_svd(obs)
        print V[0:2]

        pca = IPCA(obs.shape[1], 2)
        for i in xrange(obs.shape[0]):
            x = obs[i,:].transpose()
            pca.update(x)

        U = pca.components
        print U
    ```


  • Minimum-Waste Print Job Grouping Algorithm?

    - by Matt Mc
    I work at a publishing house and I am setting up one of our presses for "ganging", in other words, printing multiple jobs simultaneously. Given that different print jobs can have different quantities, and anywhere from 1 to 20 jobs might need to be considered at a time, the problem is to determine which jobs to group together to minimize waste (waste coming from over-printing on smaller-quantity jobs in a given set, that is).

    Given the following stable data:

    - All jobs are equal in terms of spatial size; placement on paper doesn't come into consideration.
    - There are three "lanes", meaning that three jobs can be printed simultaneously.
    - Ideally, each lane has one job. Part of the problem is minimizing how many lanes each job is run on. If necessary, one job could be run on two lanes, with a second job on the third lane.
    - The "grouping" waste from a given set of jobs (let's say their quantities are x, y and z) is the highest number minus the two lower numbers. So if x is the highest number, the grouping waste would be (x - y) + (x - z). Otherwise stated, waste is produced by printing jobs Y and Z (in excess of their quantities) up to the quantity of X.
    - The grouping waste is a qualifier for the given set, meaning it cannot exceed a certain quantity, or the job is simply printed alone.

    So the question is stated: how to determine which sets of jobs are grouped together, out of any given number of jobs, based on the qualifiers of 1) three similar quantities OR 2) two quantities where one is approximately double the other, AND with the aim of minimal total grouping waste across the various sets.

    (Edit) Quantity information: Typical job quantities can be from 150 to 350 on foreign languages, or 500 to 1000 on English print runs. This data can be used to set up some scenarios for an algorithm. For example, let's say you had 5 jobs: 1000, 500, 500, 450, 250. By looking at it, I can see a couple of answers. Obviously (1000/500/500) is not efficient, as you'll have a grouping waste of 1000. (500/500/450) is better, as you'll have a waste of 50, but then you run (1000) and (250) alone. But you could also run (1000/500) with 1000 on two lanes, (500/250) with 500 on two lanes, and then (450) alone. In terms of trade-offs for lane minimization vs. wastage, we could say that any grouping waste over 200 is excessive. (End edit)

    ...Needless to say, quite a problem. (For me.) I am a moderately skilled programmer, but I do not have much familiarity with algorithms and I am not fully studied in the mathematics of the area. I'm in the process of writing a sort of brute-force program that simply tries all options, neglecting any option tree that seems to have excessive grouping waste. However, I can't help but hope there's an easier and more efficient method.

    I've looked at various websites trying to find out more about algorithms in general and have been slogging my way through the symbology, but it's slow going. Unfortunately, Wikipedia's articles on the subject are very cross-dependent, and it's difficult to find an "in". The only thing I've been able to really find would seem to be a definition of the rough type of algorithm I need: "exclusive distance clustering", one-dimensionally speaking. I did look at what seems to be the popularly referred-to algorithm on this site, the bin packing one, but I was unable to see exactly how it would work with my problem. Any help is appreciated. :)
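    As a concrete starting point for the brute-force idea above, here is a minimal Python sketch: it peels off one press run of 1 to 3 jobs at a time, rejects any run whose grouping waste exceeds the 200 threshold, and minimizes first the number of runs and then the total waste (the lane-doubling option for "one quantity is about double the other" is left out for brevity):

    ```python
    from itertools import combinations

    def group_waste(quants):
        """Waste when these jobs share a run: every lane prints as many
        sheets as the largest quantity in the group."""
        return sum(max(quants) - q for q in quants)

    def best_grouping(jobs, max_waste=200):
        """Exhaustively split jobs into runs of 1-3 lanes. Returns
        (number_of_runs, total_waste, groups). Exponential, so fine for a
        handful of jobs; 20 jobs would need memoization or pruning."""
        if not jobs:
            return 0, 0, []
        first, rest_all = jobs[0], jobs[1:]
        best = None
        for size in (2, 1, 0):  # how many companions join the first job
            for combo in combinations(range(len(rest_all)), size):
                group = (first,) + tuple(rest_all[i] for i in combo)
                if group_waste(group) > max_waste:
                    continue
                rest = tuple(q for i, q in enumerate(rest_all) if i not in combo)
                runs, waste, groups = best_grouping(rest, max_waste)
                key = (runs + 1, waste + group_waste(group))
                if best is None or key < (best[0], best[1]):
                    best = (runs + 1, waste + group_waste(group), [group] + groups)
        return best

    print(best_grouping((1000, 500, 500, 450, 250)))
    # -> (3, 50, [(1000,), (500, 500, 450), (250,)])
    ```

    On the worked example this reproduces the hand-derived answer: no two-run split stays under the 200 waste limit, and among three-run splits, grouping (500, 500, 450) is the cheapest at 50.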


  • WordPress iPhone app: NSXMLParserErrorDomain error 64

    - by iMayne
    I spent hours fixing my function.php page and all seemed well, except I'm getting errors on my RSS feed at blog.xternalit.co.uk. I let that issue rest and installed the WordPress iPhone app. The connection to my blog works perfectly with WordPress's default theme. Ha! But it keeps making me reset my password. So to hell with that. I switched back to my theme and I can't connect from the app to the blog. The internet says there's an issue with my function.php. Here's a look again at the script (parts of the markup were eaten when posting):

    ```php
    '', 'after_widget' => '', 'before_title' => '', 'after_title' => '', ));

    function content($num) {
        $theContent = get_the_content();
        $output = preg_replace('/]+./', '', $theContent);
        $limit = $num + 1;
        $content = explode(' ', $output, $limit);
        array_pop($content);
        $content = implode(" ", $content) . "...";
        echo $content;
    }

    function post_is_in_descendant_category( $cats, $_post = null ) {
        foreach ( (array) $cats as $cat ) {
            // get_term_children() accepts integer ID only
            $descendants = get_term_children( (int) $cat, 'category');
            if ( $descendants && in_category( $descendants, $_post ) )
                return true;
        }
        return false;
    }

    // custom comments
    function mytheme_comment($comment, $args, $depth) {
        $GLOBALS['comment'] = $comment; ?
        id="li-comment-" " %s says:'), get_comment_author_link()) ?
        comment_approved == '0') : ?
        $depth, 'max_depth' => $args['max_depth']))) ?
        </div>

    0 && isset($_POST['rockwell_settings']) ) {
        $options = array ( 'style','logo_img','logo_alt','logo_txt', 'logo_tagline',
            'tagline_width', 'contact_email','ads', 'advertise_page', 'twitter_link',
            'facebook_link', 'flickr', 'about_tit', 'about_txt', 'analytics');
        foreach ( $options as $opt ) {
            delete_option ( 'rockwell_'.$opt, $_POST[$opt] );
            add_option ( 'rockwell_'.$opt, $_POST[$opt] );
        }
    }
    add_theme_page(__('Rockwell Options'), __('Rockwell Options'), 'edit_themes',
        basename(__FILE__), 'rockwell_settings');
    }

    function rockwell_settings () {?
        Rockwell Options Panel
        General Settings
        Theme Color Scheme  selected="selected"pink.css  selected="selected"blue.css  selected="selected"orange.css
        Logo image (full path to image) " class="regular-text" /
        Logo image ALT text " class="regular-text" /
        Text logo " class="regular-text" /  Leave this empty if you entered an image as logo
        Logo Tag Line " class="regular-text" /
        Tag Line Box Width (px)  Default width: 300px " class="regular-text" /
        Email Address for Contact Form " class="regular-text" /
        Twitter link " class="regular-text" /
        Facebook link " class="regular-text" /
        Flickr Photostream  selected="selected"Yes  selected="selected"No
        Make sure you have FlickrRSS plugin activated if you choose to enable Flickr Photostream
        Sidebar About Box Title " class="regular-text" /
        Text Ads Box Settings
        Ads Section Enabled:  selected="selected"Yes  selected="selected"No
        Make sure you have AdMinister plugin activated and have the position "Sidebar" created within the plugin.
        Advertise Page
        Google Analytics code:

    \'"[\'"].*/i', $post->post_content, $matches);
        $first_img = $matches[1][0];
        if (empty($first_img)) {
            // Defines a default image
            $first_img = "/images/default.jpg";
        }
        return $first_img;
    }
    ?
    ```

    Please note, if I add: to the bottom of the page, the blog crashes. Without: I get tons of errors. Any help?


  • Writing a code example

    - by Stefano Borini
    I would like to have your feedback regarding code examples. One of the most frustrating experiences I sometimes have when learning a new technology is finding useless examples. I think an example is the most precious thing that comes with a new library, language, or technology. It must be a starting point, a wise and unadulterated explanation of how to achieve a given result.

    A perfect example must have the following characteristics:

    - Self-contained: it should be small enough to be compiled or executed as a single program, without dependencies or complex makefiles. An example is also a strong functional test of whether you correctly installed the new technology. The more issues could arise, the more likely it is that something goes wrong, and the more difficult it is to debug and solve the situation.
    - Pertinent: it should demonstrate one, and only one, specific feature of your software/library, involving the minimal additional behavior from external libraries.
    - Helpful: the code should bring you forward, step by step, using comments or self-documenting code.
    - Extensible: the example code should be a small "framework" or blueprint for additional tinkering. A learner can start by adding features to this blueprint.
    - Recyclable: it should be possible to extract parts of the example to use in your own code.
    - Easy: an example is not the place to show off your code-fu skillz. Keep it easy.

    A helpful acronym: SPHERE.

    Prototypical examples of violations of those rules are the following:

    - Violation of self-containedness: an example spanning multiple files without any real need for it. If your example is a python program, keep everything in a single module file. Don't sub-modularize it. In Java, try to keep everything in a single class, unless you really must partition some entity into a meaningful object you need to pass around (and Java mandates one class per file, if I remember correctly).
    - Violation of pertinency: when showing how many different shapes you can draw, adding radio buttons and complex controls with all the possible choices for point shapes is a bad idea. You de-focalize your example code, introducing code for event handling, controls initialization etc., and this is not part of the feature you want to demonstrate; it is unnecessary noise in the understanding of the crucial mechanisms providing the feature.
    - Violation of helpfulness: code containing dubious naming, wrong comments, hacks, and functions longer than one page of code.
    - Violation of extensibility: badly factored code that has everything in a single function, with potentially swappable entities embedded within the code. Example: if an example reads data from a file and displays it, create a method getData() returning a useful entity, instead of opening the file raw and plotting the stuff. This way, if the user of the library needs to read data from an HTTP server instead, he just has to modify the getData() module and use the example almost as-is. Another violation of extensibility comes if the example code is not under a fully liberal (e.g. MIT or BSD) license.
    - Violation of recyclability: when the code layout is so intermingled that it is difficult to easily copy and paste parts of it and recycle them into another program. Again, licensing is also a factor.
    - Violation of easiness: yes, you are a functional-programming nerd and want to show how cool you are by doing everything on a single line of map, filter and so on, but that might not be helpful to someone else, who is already under pressure to understand your library and now has to understand your code as well.

    And in general, the final rule: if it takes more than 10 minutes to do the following: compile the code, run it, read the source, and understand it fully, it means that the example is not a good one.

    Please let me know your opinion, either positive or negative, or your experience in this regard.


  • Representing game states in Tic Tac Toe

    - by dacman
    The goal of the assignment that I'm currently working on for my Data Structures class is to create a version of Quantum Tic Tac Toe with an AI that plays to win. Currently, I'm having a bit of trouble finding the most efficient way to represent states.

    Overview of the current structure:

    - AbstractGame
      - Has and manages AbstractPlayers (game.nextPlayer() returns the next player by int ID)
      - Has and initializes AbstractBoard at the beginning of the game
      - Has a GameTree (complete if called in initialization, incomplete otherwise)
    - AbstractBoard
      - Has a State, a Dimension, and a parent Game
      - Is a mediator between Player and State (translates States from collections of rows to a Point representation)
      - Is a StateConsumer
    - AbstractPlayer
      - Is a StateProducer
      - Has a ConcreteEvaluationStrategy to evaluate the current board
    - StateTransversalPool
      - Precomputes possible transversals of "3-states" and stores them in a HashMap, where the Set contains the nextStates for a given "3-state"
    - State
      - Contains 3 Sets: a Set of X-moves, O-moves, and the Board
      - Each Integer in the set is a row. These Integer values can be used to get the next row-state from the StateTransversalPool

    So the principle is: each row can be represented by the binary numbers 000-111, where 0 implies an open space and 1 implies a closed space. So, for an incomplete TTT board:

    From the `Set<Integer> board` perspective:

    ```
    X_X   R1 might be: 101
    OO_   R2 might be: 110
    X_X   R3 might be: 101, where 1 is an open space, and 0 is a closed space
    ```

    From the `Set<Integer> xMoves` perspective:

    ```
    X_X   R1 might be: 101
    OO_   R2 might be: 000
    X_X   R3 might be: 101, where 1 is an X and 0 is not
    ```

    From the `Set<Integer> oMoves` perspective:

    ```
    X_X   R1 might be: 000
    OO_   R2 might be: 110
    X_X   R3 might be: 000, where 1 is an O and 0 is not
    ```

    Then we see that x{R1,R2,R3} & o{R1,R2,R3} = board{R1,R2,R3}.

    The problem is quickly generating next states for the GameTree. If I have player Max (x) with board{R1,R2,R3}, then getting the next row-states for R1, R2, and R3 is simple:

    ```java
    Set<Integer> R1nextStates = StateTransversalPool.get(R1);
    ```

    The problem is that I have to combine each one of those states with R1 and R2. Is there a better data structure besides Set that I could use? Is there a more efficient approach in general? I've also found Point<->State mediation cumbersome. Is there another approach that I could try there? Thanks!

    Here is the code for my ConcretePlayer class. It might help explain how players produce new states via moves, using the StateProducer (which might need to become StateFactory or StateBuilder).

    ```java
    public class ConcretePlayerGeneric extends AbstractPlayer {
        @Override
        public BinaryState makeMove() {
            // Given a move and the current state, produce a new state
            Point playerMove = super.strategy.evaluate(this);
            BinaryState currentState = super.getInGame().getBoard().getState();
            return StateProducer.getState(this, playerMove, currentState);
        }
    }
    ```

    EDIT: I'm starting with normal TTT and moving to Quantum TTT. Given the framework, it should be as simple as creating several new Concrete classes and tweaking some things.
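    For what it's worth, a common alternative to per-row integer sets is to encode the whole board as two 9-bit masks, one per player; next-state generation then becomes a single loop over the open cells. A small Python sketch of the encoding idea (not of the classes above):

    ```python
    def next_states(x_mask, o_mask, x_to_move=True):
        """Yield successor (x_mask, o_mask) pairs for a 3x3 board encoded
        as two 9-bit integers, one bit per cell, row-major."""
        occupied = x_mask | o_mask
        for cell in range(9):
            bit = 1 << cell
            if occupied & bit:
                continue  # cell already taken
            if x_to_move:
                yield x_mask | bit, o_mask
            else:
                yield x_mask, o_mask | bit

    # The example board above, X_X / OO_ / X_X, has three open cells:
    x = 0b101000101
    o = 0b000110000
    print(len(list(next_states(x, o))))  # -> 3
    ```

    Win checks reduce to AND-ing a player's mask against eight precomputed line masks, and a whole position fits in one int, which also makes it a cheap HashMap key for the game tree.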


  • Does this inheritance design belong in the database?

    - by Berryl
    === CLARIFICATION ====

    The 'answers' older than March are not answers to the question in this post!

    Hello. In my domain I need to track allocations of time spent on Activities by resources. There are two general types of Activities of interest: ones based on a Project and ones based on an Account. The notions of Project and Account have other features totally unrelated both to each other and to capturing allocations of time, and each is modeled as a table in the database.

    For a given Allocation of time, however, it makes sense not to care whether the allocation was made to a Project or an Account, so an ActivityBase class abstracts away the difference. An ActivityBase is either a ProjectActivity or an AccountingActivity (the object model is below).

    Back to the database, though: there is no direct value in having tables for ProjectActivity and AccountingActivity. BUT the Allocation table needs to store something in the column for its ActivityBase. Should that something be the Id of the Project / Account, or a reference to tables for ProjectActivity / AccountingActivity? How would the mapping look?

    === Current Db Mapping (Fluent) ====

    Below is how the mapping currently looks:

    ```csharp
    public class ActivityBaseMap : IAutoMappingOverride<ActivityBase>
    {
        public void Override(AutoMapping<ActivityBase> mapping)
        {
            //mapping.IgnoreProperty(x => x.BusinessId);
            //mapping.IgnoreProperty(x => x.Description);
            //mapping.IgnoreProperty(x => x.TotalTime);
            mapping.IgnoreProperty(x => x.UniqueId);
        }
    }

    public class AccountingActivityMap : SubclassMap<AccountingActivity>
    {
        public void Override(AutoMapping<AccountingActivity> mapping)
        {
            mapping.References(x => x.Account);
        }
    }

    public class ProjectActivityMap : SubclassMap<ProjectActivity>
    {
        public void Override(AutoMapping<ProjectActivity> mapping)
        {
            mapping.References(x => x.Project);
        }
    }
    ```

    There are two odd smells here. Firstly, the inheritance chain adds nothing in the way of properties; it simply adapts Projects and Accounts into a common interface so that either can be used in an Allocation. Secondly, the properties in the ActivityBase interface are redundant to keep in the db, since that information is available in Projects and Accounts.

    Cheers, Berryl

    ==== Domain =====

    ```csharp
    public class Allocation : Entity
    {
        ...
        public virtual ActivityBase Activity { get; private set; }
        ...
    }

    public abstract class ActivityBase : Entity
    {
        public virtual string BusinessId { get; protected set; }
        public virtual string Description { get; protected set; }
        public virtual ICollection<Allocation> Allocations
        {
            get { return _allocations.Values; }
        }
        public virtual TimeQuantity TotalTime
        {
            get { return TimeQuantity.Hours(Allocations.Sum(x => x.TimeSpent.Amount)); }
        }
    }

    public class ProjectActivity : ActivityBase
    {
        public virtual Project Project { get; private set; }

        public ProjectActivity(Project project)
        {
            BusinessId = project.Code.ToString();
            Description = project.Description;
            Project = project;
        }
    }
    ```


  • Problem creating a database with PHP PDO

    - by Leandro Alonso
    Hello guys, I'm having a problem with a SQL query in my PHP application. When the user accesses it for the first time, the app executes this query to create the whole database:

    ```sql
    CREATE TABLE `databases` (
      `id` bigint(20) NOT NULL auto_increment,
      `driver` varchar(45) NOT NULL,
      `server` text NOT NULL,
      `user` text NOT NULL,
      `password` text NOT NULL,
      `database` varchar(200) NOT NULL,
      PRIMARY KEY (`id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=2 ;

    -- --------------------------------------------------------

    --
    -- Table structure for table `modules`
    --

    CREATE TABLE `modules` (
      `id` bigint(20) unsigned NOT NULL auto_increment,
      `title` varchar(100) NOT NULL,
      `type` varchar(150) NOT NULL,
      PRIMARY KEY (`id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=29 ;

    -- --------------------------------------------------------

    --
    -- Table structure for table `modules_data`
    --

    CREATE TABLE `modules_data` (
      `id` bigint(20) NOT NULL auto_increment,
      `module_id` bigint(20) unsigned NOT NULL,
      `key` varchar(150) NOT NULL,
      `value` tinytext,
      PRIMARY KEY (`id`),
      KEY `fk_modules_data_modules` (`module_id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=184 ;

    -- --------------------------------------------------------

    --
    -- Table structure for table `modules_position`
    --

    CREATE TABLE `modules_position` (
      `user_id` bigint(20) unsigned NOT NULL,
      `tab_id` bigint(20) unsigned NOT NULL,
      `module_id` bigint(20) unsigned NOT NULL,
      `column` smallint(1) default NULL,
      `line` smallint(1) default NULL,
      PRIMARY KEY (`user_id`,`tab_id`,`module_id`),
      KEY `fk_modules_order_users` (`user_id`),
      KEY `fk_modules_order_tabs` (`tab_id`),
      KEY `fk_modules_order_modules` (`module_id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

    -- --------------------------------------------------------

    --
    -- Table structure for table `tabs`
    --

    CREATE TABLE `tabs` (
      `id` bigint(20) unsigned NOT NULL auto_increment,
      `title` varchar(60) NOT NULL,
      `columns` smallint(1) NOT NULL,
      PRIMARY KEY (`id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=12 ;

    -- --------------------------------------------------------

    --
    -- Table structure for table `tabs_has_modules`
    --

    CREATE TABLE `tabs_has_modules` (
      `tab_id` bigint(20) unsigned NOT NULL,
      `module_id` bigint(20) unsigned NOT NULL,
      PRIMARY KEY (`tab_id`,`module_id`),
      KEY `fk_tabs_has_modules_tabs` (`tab_id`),
      KEY `fk_tabs_has_modules_modules` (`module_id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

    -- --------------------------------------------------------

    --
    -- Table structure for table `users`
    --

    CREATE TABLE `users` (
      `id` bigint(20) unsigned NOT NULL auto_increment,
      `login` varchar(60) NOT NULL,
      `password` varchar(64) NOT NULL,
      `email` varchar(100) NOT NULL,
      `name` varchar(250) default NULL,
      `user_level` bigint(20) unsigned NOT NULL,
      PRIMARY KEY (`id`),
      KEY `fk_users_user_levels` (`user_level`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=4 ;

    -- --------------------------------------------------------

    --
    -- Table structure for table `users_has_tabs`
    --

    CREATE TABLE `users_has_tabs` (
      `user_id` bigint(20) unsigned NOT NULL,
      `tab_id` bigint(20) unsigned NOT NULL,
      `order` smallint(2) NOT NULL,
      `columns_width` varchar(255) default NULL,
      PRIMARY KEY (`user_id`,`tab_id`),
      KEY `fk_users_has_tabs_users` (`user_id`),
      KEY `fk_users_has_tabs_tabs` (`tab_id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

    -- --------------------------------------------------------

    --
    -- Table structure for table `user_levels`
    --

    CREATE TABLE `user_levels` (
      `id` bigint(20) unsigned NOT NULL auto_increment,
      `level` smallint(2) NOT NULL,
      PRIMARY KEY (`id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=3 ;

    -- --------------------------------------------------------

    --
    -- Table structure for table `user_meta`
    --

    CREATE TABLE `user_meta` (
      `id` bigint(20) unsigned NOT NULL auto_increment,
      `user_id` bigint(20) unsigned default NULL,
      `key` varchar(150) NOT NULL,
      `value` longtext NOT NULL,
      PRIMARY KEY (`id`),
      KEY `fk_user_meta_users` (`user_id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=4 ;

    --
    -- Constraints for dumped tables
    --

    --
    -- Constraints for table `modules_data`
    --
    ALTER TABLE `modules_data`
      ADD CONSTRAINT `fk_modules_data_modules` FOREIGN KEY (`module_id`) REFERENCES `modules` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION;

    --
    -- Constraints for table `modules_position`
    --
    ALTER TABLE `modules_position`
      ADD CONSTRAINT `fk_modules_order_modules` FOREIGN KEY (`module_id`) REFERENCES `modules` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION,
      ADD CONSTRAINT `fk_modules_order_tabs` FOREIGN KEY (`tab_id`) REFERENCES `tabs` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION,
      ADD CONSTRAINT `fk_modules_order_users` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION;

    --
    -- Constraints for table `users`
    --
    ALTER TABLE `users`
      ADD CONSTRAINT `fk_users_user_levels` FOREIGN KEY (`user_level`) REFERENCES `user_levels` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION;

    --
    -- Constraints for table `user_meta`
    --
    ALTER TABLE `user_meta`
      ADD CONSTRAINT `fk_user_meta_users` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION;

    INSERT INTO `user_levels` VALUES(1, 10);
    INSERT INTO `user_levels` VALUES(2, 1);
    INSERT INTO `users` VALUES(1, 'admin', 'password', '[email protected]', NULL, 1);
    INSERT INTO `user_meta` VALUES (NULL, 1, 'last_tab', 1);
    ```

    In some environments I get this error:

    ```
    SQLSTATE[HY000]: General error: 1005 Can't create table 'dms.databases' (errno: 150)
    ```

    I tried everything that I could find on Google, but nothing works. The strange part is that if I run this query in phpMyAdmin, it creates my database without any error.

  • Paging, sorting and filtering in a stored procedure (SQL Server)

    - by Fruitbat
    I was looking at different ways of writing a stored procedure to return a "page" of data. This was for use with the asp ObjectDataSource, but it could be considered a more general problem. The requirement is to return a subset of the data based on the usual paging parameters, startPageIndex and maximumRows, but also a sortBy parameter to allow the data to be sorted. Also there are some parameters passed in to filter the data on various conditions.

    One common way to do this seems to be something like this:

    [Method 1]

        ;WITH stuff AS (
            SELECT
                CASE
                    WHEN @SortBy = 'Name' THEN ROW_NUMBER() OVER (ORDER BY Name)
                    WHEN @SortBy = 'Name DESC' THEN ROW_NUMBER() OVER (ORDER BY Name DESC)
                    WHEN @SortBy = ...
                    ELSE ROW_NUMBER() OVER (ORDER BY whatever)
                END AS Row,
                ., ., .,
            FROM Table1
            INNER JOIN Table2 ...
            LEFT JOIN Table3 ...
            WHERE ... (lots of things to check)
        )
        SELECT *
        FROM stuff
        WHERE (Row > @startRowIndex)
          AND (Row <= @startRowIndex + @maximumRows OR @maximumRows <= 0)
        ORDER BY Row

    One problem with this is that it doesn't give the total count, and generally we need another stored procedure for that. This second stored procedure has to replicate the parameter list and the complex WHERE clause. Not nice. One solution is to append an extra column to the final select list, (SELECT COUNT(*) FROM stuff) AS TotalRows. This gives us the total, but repeats it for every row in the result set, which is not ideal.

    [Method 2] An interesting alternative is given here (http://www.4guysfromrolla.com/articles/032206-1.aspx) using dynamic SQL. He reckons that the performance is better because the CASE statement in the first solution drags things down. Fair enough, and this solution makes it easy to get the totalRows and slap it into an output parameter. But I hate coding dynamic SQL. All that 'bit of SQL ' + STR(@parm1) + ' bit more SQL' gubbins.

    [Method 3] The only way I can find to get what I want, without repeating code which would have to be synchronised, and keeping things reasonably readable, is to go back to the "old way" of using a table variable:

        DECLARE @stuff TABLE (Row INT, ...)

        INSERT INTO @stuff
        SELECT
            CASE
                WHEN @SortBy = 'Name' THEN ROW_NUMBER() OVER (ORDER BY Name)
                WHEN @SortBy = 'Name DESC' THEN ROW_NUMBER() OVER (ORDER BY Name DESC)
                WHEN @SortBy = ...
                ELSE ROW_NUMBER() OVER (ORDER BY whatever)
            END AS Row,
            ., ., .,
        FROM Table1
        INNER JOIN Table2 ...
        LEFT JOIN Table3 ...
        WHERE ... (lots of things to check)

        SELECT *
        FROM @stuff
        WHERE (Row > @startRowIndex)
          AND (Row <= @startRowIndex + @maximumRows OR @maximumRows <= 0)
        ORDER BY Row

    (Or a similar method using an IDENTITY column on the table variable.) Here I can just add a SELECT COUNT on the table variable to get the totalRows and put it into an output parameter.

    I did some tests, and with a fairly simple version of the query (no sortBy and no filter), method 1 seems to come out on top (almost twice as quick as the other two). Then I tested with the complexity I will probably need, and with the SQL in stored procedures. With this I get method 1 taking nearly twice as long as the other two methods. Which seems strange. Is there any good reason why I shouldn't spurn CTEs and stick with method 3?

    UPDATE - 15 March 2012

    I tried adapting Method 1 to dump the page from the CTE into a temporary table so that I could extract the TotalRows and then select just the relevant columns for the result set. This seemed to add significantly to the time (more than I expected). I should add that I'm running this on a laptop with SQL Server Express 2008 (all that I have available), but still the comparison should be valid.

    I looked again at the dynamic SQL method. It turns out I wasn't really doing it properly (just concatenating strings together). I set it up as in the documentation for sp_executesql (with a parameter description string and parameter list) and it's much more readable. Also this method runs fastest in my environment. Why that should be still baffles me, but I guess the answer is hinted at in Hogan's comment.
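
    For reference, the parameterized shape looks roughly like this - a sketch with a made-up filter column and made-up parameter names (assume @SortBy, @startRowIndex, @maximumRows, and @someFilterValue are procedure parameters), not the actual procedure. @SortBy must be validated against a whitelist of column names before concatenation, since ORDER BY cannot take it as a parameter:

        DECLARE @sql NVARCHAR(MAX), @where NVARCHAR(MAX), @params NVARCHAR(500);
        DECLARE @totalRows INT;

        -- Build the WHERE clause once; both statements below reuse it,
        -- so there is nothing to keep synchronised by hand.
        SET @where = N' WHERE t1.SomeColumn = @p1 ';
        SET @params = N'@p1 INT, @startRowIndex INT, @maximumRows INT, @totalRows INT OUTPUT';

        SET @sql = N'SELECT @totalRows = COUNT(*) FROM Table1 t1' + @where + N';
        WITH stuff AS (
            SELECT ROW_NUMBER() OVER (ORDER BY ' + @SortBy + N') AS Row, t1.*
            FROM Table1 t1' + @where + N'
        )
        SELECT * FROM stuff
        WHERE Row > @startRowIndex
          AND (Row <= @startRowIndex + @maximumRows OR @maximumRows <= 0)
        ORDER BY Row;';

        EXEC sp_executesql @sql, @params,
             @p1 = @someFilterValue,
             @startRowIndex = @startRowIndex,
             @maximumRows = @maximumRows,
             @totalRows = @totalRows OUTPUT;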

    Read the article

  • MIME "Content-Type" folding and parameter question regarding RFCs?

    - by BastiBense
    Hello, I'm trying to implement a basic MIME parser for multipart/related in C++/Qt. So far I've been writing some basic parser code for headers, and I'm reading the RFCs to get an idea how to do everything as close to the specification as possible. Unfortunately there is a part in the RFC that confuses me a bit.

    From RFC 822, Section 3.1.1:

        Each header field can be viewed as a single, logical line of ASCII
        characters, comprising a field-name and a field-body. For convenience,
        the field-body portion of this conceptual entity can be split into a
        multiple-line representation; this is called "folding". The general
        rule is that wherever there may be linear-white-space (NOT simply
        LWSP-chars), a CRLF immediately followed by AT LEAST one LWSP-char
        may instead be inserted.

    Alright, so I simply parse a header field, and if a CRLF follows with linear whitespace, I concatenate those in a useful manner to produce a single header line. Let's proceed...

    From RFC 2045, Section 5.1:

        In the Augmented BNF notation of RFC 822, a Content-Type header field
        value is defined as follows:

        content := "Content-Type" ":" type "/" subtype *(";" parameter)
                   ; Matching of media type and subtype
                   ; is ALWAYS case-insensitive.
        [...]
        parameter := attribute "=" value
        attribute := token
                     ; Matching of attributes
                     ; is ALWAYS case-insensitive.
        value := token / quoted-string
        token := 1*<any (US-ASCII) CHAR except SPACE, CTLs, or tspecials>

    Okay. So it seems if you want to specify a Content-Type header with parameters, you simply do it like this:

        Content-Type: multipart/related; foo=bar; something=else

    ...and a folded version of the same header would look like this:

        Content-Type: multipart/related;
            foo=bar;
            something=else

    Correct? Good. As I kept reading the RFCs, I came across the following in RFC 2387, Section 5.1 (Examples):

        Content-Type: Multipart/Related; boundary=example-1
            start="<[email protected]>";
            type="Application/X-FixedRecord"
            start-info="-o ps"

        --example-1
        Content-Type: Application/X-FixedRecord
        Content-ID: <[email protected]>

        [data]

        --example-1
        Content-Type: Application/octet-stream
        Content-Description: The fixed length records
        Content-Transfer-Encoding: base64
        Content-ID: <[email protected]>

        [data]

        --example-1--

    Hmm, this is odd. Do you see the Content-Type header? It has a number of parameters, but not all of them have a ";" as parameter delimiter. Maybe I just didn't read the RFCs correctly, but if my parser works strictly as the specification defines, the type and start-info parameters would result in a single string or, worse, a parser error.

    What are your thoughts on this? Just a typo in the RFCs? Or did I miss something? Thanks!
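
    For the folding rule itself, the unfolding pass can be as small as this - a sketch on plain std::string rather than QString, assuming the raw header block uses CRLF line endings:

        #include <string>

        // Unfold per RFC 822 section 3.1.1: a CRLF immediately followed by at
        // least one LWSP char (space or horizontal tab) is a continuation, so
        // CRLF plus the first LWSP char collapse into a single space.
        std::string unfoldHeaders(const std::string &raw)
        {
            std::string out;
            out.reserve(raw.size());
            for (std::size_t i = 0; i < raw.size(); ++i) {
                if (raw[i] == '\r' && i + 2 < raw.size() && raw[i + 1] == '\n'
                    && (raw[i + 2] == ' ' || raw[i + 2] == '\t')) {
                    out += ' ';
                    i += 2;   // skip the LF and the LWSP char just consumed
                } else {
                    out += raw[i];
                }
            }
            return out;
        }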

    Read the article

  • Help with 2-part question on ASP.NET MVC and Custom Security Design

    - by JustAProgrammer
    I'm using ASP.NET MVC and I am trying to separate a lot of my logic. Eventually, this application will be pretty big. It's basically a SaaS app that I need to allow different kinds of clients to access. I have a two-part question: the first deals with my general design, and the second deals with how to utilize it in ASP.NET MVC.

    Primarily, there will initially be an ASP.NET MVC "client" front-end, and there will be a set of web services for third parties to interact with (perhaps mobile, etc). I realize I could have the ASP.NET MVC app interact just through the Web Service, but I think that is unnecessary overhead. So, I am creating an API that will essentially be a DLL that the Web App and the Web Services will utilize. The API consists of the main set of business logic and Data Transfer Objects, etc. (So, this includes methods like CreateCustomer, EditProduct, etc., for example.)

    Also, my permissions requirements are a little complicated. I can't really use a straight Roles system, as I need to have some fine-grained permissions (but all permissions are positive rights). So, I don't think I can really use the ASP.NET Roles/Membership system, or if I can, it seems like I'd be doing more work than rolling my own. I've used Membership before, and for this one I think I'd rather roll my own. Both the Web App and Web Services will need to keep security as a concern.

    So, my design is kind of like this:

    - Each method in the API will need to verify the security of the caller.
    - In the Web App, each "page" ("action" in MVC speak) will also check the user's permissions. (So, don't present the user with the "Add Customer" button if the user does not have that right, but also, whenever the API receives AddCustomer(), check the security too.)
    - I think the Web Service really needs the checking in the DLL because it may not always be used in some kind of pre-authenticated context (like using Session/Cookies in a Web App); also, having the security checks in the API means I don't really HAVE TO check it in other places if I'm on a mobile (say iPhone) and don't want to do all kinds of checking on the client.
    - However, in the Web App I think there will be some duplication of work, since the Web App checks the user's security before presenting the user with options, which is OK, but I was thinking of a way to avoid this duplication by allowing the Web App to tell the API not to check the security, while the Web Service would always want security to be verified.

    Is this a good method? If not, what's better? If so, what's a good way of implementing this? I was thinking of doing the following: in the API, I would have two functions for each action:

        // Here, "Credential" objects are just something I made up
        public void AddCustomer(string customerName, Credential credential, bool checkSecurity)
        {
            if (checkSecurity)
            {
                if (Has_Rights_To_Add_Customer(credential)) // made up for clarity
                {
                    AddCustomer(customerName);
                }
                else
                {
                    // throw an exception or somehow present an error
                }
            }
            else
            {
                AddCustomer(customerName);
            }
        }

        public void AddCustomer(string customerName)
        {
            // actual logic to add the customer into the DB or whatever
            // Would it be good for this method to verify that the caller is the Web App
            // through some method?
        }

    So, is this a good design, or should I do something differently?

    My next question is that it clearly doesn't seem like I can use [Authorize ...] for determining whether a user has the permissions to do something. In fact, one action might depend on a variety of permissions, and the View might hide or show certain options depending on the permission. What's the best way to do this? Should I have some kind of PermissionSet object that the user carries around throughout the Web App in Session or whatever, where the MVC action method checks whether that user can use that action, and the View checks the various permissions to decide what to hide or show?
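
    For the second part, one shape the fine-grained checks could take - a sketch in which Permission, PermissionSet, and RequirePermissionAttribute are all made-up names, assuming the set is loaded into Session at login:

        using System;
        using System.Collections.Generic;
        using System.Web;
        using System.Web.Mvc;

        public enum Permission { AddCustomer, EditProduct /* ... */ }

        // Hypothetical per-user permission set, stored in Session at login and
        // consulted by both action filters and views.
        [Serializable]
        public class PermissionSet
        {
            private readonly HashSet<Permission> granted = new HashSet<Permission>();
            public void Grant(Permission p) { granted.Add(p); }
            public bool Has(Permission p) { return granted.Contains(p); }
        }

        // Lets an action declare the permission it needs, instead of a role name.
        public class RequirePermissionAttribute : AuthorizeAttribute
        {
            private readonly Permission required;
            public RequirePermissionAttribute(Permission required) { this.required = required; }

            protected override bool AuthorizeCore(HttpContextBase httpContext)
            {
                var perms = httpContext.Session["Permissions"] as PermissionSet;
                return perms != null && perms.Has(required);
            }
        }

        // Usage on an action:
        // [RequirePermission(Permission.AddCustomer)]
        // public ActionResult AddCustomer(...) { ... }

    Views can ask the same PermissionSet what to hide or show, so the API, the actions, and the views all consult one object rather than three separate rule sets.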

    Read the article

  • Detecting Acceleration in a car (iPhone Accelerometer)

    - by TheGazzardian
    Hello, I am working on an iPhone app where we are trying to calculate the acceleration of a moving car. Similar apps have accomplished this (Dynolicious), but the difference is that this app is designed to be used during general city driving, not on a drag strip. This leads us to one big concern that Dynolicious was luckily able to avoid: hills. Yes, hills.

    There are two important stages to this: calibration, and actual driving.

    Our initial run was simple and suffered the consequences. During the calibration stage, I took the average force on the phone, and during running, I just subtracted the average force from the current force to get the current acceleration this frame. The problem with this is that the typical car receives much more force than just the forward force - everything from turning to potholes was causing the values to go out of sync with what was really happening.

    The next run was to add the condition that the iPhone must be oriented in such a way that the screen was facing toward the back of the car. Using this method, I attempted to follow only force on the z-axis, but this obviously led to problems unless the iPhone was oriented directly upright, because of gravity. Some trigonometry later, and I had managed to work gravity out of the equation, so that the car was actually being read very, very well by the iPhone.

    Until I hit a slope. As soon as the angle of the car changed, suddenly I was receiving accelerations and decelerations that didn't make sense, and we were once again going out of sync.

    Talking with someone a lot smarter than me at math led to a solution that I have been trying to implement for longer than I would like to admit. Its steps are as follows:

    1) During calibration, measure gravity as a vector instead of a size. Store that vector.
    2) When the car initially moves forward, take the vector of motion and subtract gravity. Use this as the forward momentum. (Ignore, for now, the user cases where this will be difficult and let's concentrate on the math. :)
    3) From the forward vector and the gravity vector, construct a plane.
    4) Whenever a force is received, project it onto said plane to get rid of sideways force/etc.
    5) Then, use that force, the known magnitude of gravity, and the known direction of forward motion to essentially solve a triangle to get the forward vector.

    The problem that is causing the most difficulty in this new system is not step 5, which I have gotten to the point where all the numbers look as they should. The difficult part is actually the detection of the forward vector. I am selecting vectors whose magnitude exceeds gravity, and from there, averaging them and subtracting gravity. (I am doing some error checking to make sure that I am not using a force just because the iPhone accelerometer was off by a bit, which happens more frequently than I would like.) But if I plot the vectors that I am using, they actually vary by an angle of about 20-30 degrees, which can lead to some strong inaccuracies. The end result is that the app is even more inaccurate now than before.

    So basically - all you math and iPhone brains out there - any glaring errors? Any potentially better solutions? Any experience that could be useful at all?

    Award: offering a bounty of $250 to the first answer that leads to a solution.
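
    For concreteness, the geometry of steps 3-5 can be sketched like this (plain C99; gravity and forward are the calibrated vectors from steps 1-2, and the plane normal is taken as their cross product - an assumption, since the post doesn't spell out how the plane is represented):

        #include <math.h>

        typedef struct { double x, y, z; } Vec3;

        static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
        static Vec3 scale(Vec3 v, double s) { return (Vec3){ v.x*s, v.y*s, v.z*s }; }
        static Vec3 sub(Vec3 a, Vec3 b) { return (Vec3){ a.x-b.x, a.y-b.y, a.z-b.z }; }
        static Vec3 cross(Vec3 a, Vec3 b) {
            return (Vec3){ a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
        }

        /* Project a raw accelerometer sample onto the plane spanned by gravity
         * and forward (step 4), remove gravity, and return the signed component
         * along the direction of travel (step 5). */
        double forwardAccel(Vec3 sample, Vec3 gravity, Vec3 forward)
        {
            Vec3 n = cross(gravity, forward);                 /* plane normal */
            Vec3 inPlane = sub(sample, scale(n, dot(sample, n) / dot(n, n)));
            Vec3 motion = sub(inPlane, gravity);              /* gravity lies in the plane */
            return dot(motion, forward) / sqrt(dot(forward, forward));
        }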

    Read the article

  • Use QuickCheck by generating primes

    - by Dan
    Background

    For fun, I'm trying to write a property for QuickCheck that can test the basic idea behind cryptography with RSA:

    - Choose two distinct primes, p and q. Let N = p*q.
    - e is some number relatively prime to (p-1)(q-1) (in practice, e is usually 3 for fast encoding).
    - d is the modular inverse of e modulo (p-1)(q-1).
    - For all x such that 1 < x < N, it is always true that (x^e)^d = x modulo N.

    In other words, x is the "message", raising it to the eth power mod N is the act of "encoding" the message, and raising the encoded message to the dth power mod N is the act of "decoding" it. (The property is also trivially true for x = 1, a case which is its own encryption.)

    Code

    Here are the methods I have coded up so far:

        import Test.QuickCheck

        -- modular exponentiation
        modExp :: Integral a => a -> a -> a -> a
        modExp y z n = modExp' (y `mod` n) z `mod` n
          where modExp' y z | z == 0 = 1
                            | even z = modExp (y*y) (z `div` 2) n
                            | odd z  = (modExp (y*y) (z `div` 2) n) * y

        -- relatively prime
        rPrime :: Integral a => a -> a -> Bool
        rPrime a b = gcd a b == 1

        -- multiplicative inverse (modular)
        mInverse :: Integral a => a -> a -> a
        mInverse 1 _ = 1
        mInverse x y = (n * y + 1) `div` x
          where n = x - mInverse (y `mod` x) x

        -- just a quick way to test for primality
        n `divides` x = x `mod` n == 0
        primes = 2 : filter isPrime [3..]
        isPrime x = null . filter (`divides` x) $ takeWhile (\y -> y*y <= x) primes

        -- the property
        prop_rsa (p,q,x) = isPrime p && isPrime q &&
                           p /= q && x > 1 && x < n &&
                           rPrime e t ==>
                           x == (x `powModN` e) `powModN` d
          where e = 3
                n = p*q
                t = (p-1)*(q-1)
                d = mInverse e t
                a `powModN` b = modExp a b n

    (Thanks, Google and random blog, for the implementation of the modular multiplicative inverse.)

    Question

    The problem should be obvious: there are way too many conditions on the property to make it at all usable. Trying to invoke quickCheck prop_rsa in GHCi made my terminal hang. So I've poked around the QuickCheck manual a bit, and it says properties may take the form

        forAll <generator> $ \<pattern> -> <property>

    How do I make a <generator> for prime numbers? Or with the other constraints, so that quickCheck doesn't have to sift through a bunch of failed conditions? Any other general advice (especially regarding QuickCheck) is welcome.
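
    One direction that seems to fit: draw p and q from a bounded pool of the primes list above, using elements and suchThat from Test.QuickCheck, so no case is discarded for non-primality. A sketch, reusing the definitions above:

        genPrimePair :: Gen (Integer, Integer)
        genPrimePair = do
          let pool = take 100 primes            -- bounded pool of known primes
          p <- elements pool
          q <- elements pool `suchThat` (/= p)
          return (p, q)

        prop_rsa' :: Property
        prop_rsa' = forAll genPrimePair $ \(p, q) ->
          let n = p * q
              t = (p - 1) * (q - 1)
              e = 3
              d = mInverse e t
          in rPrime e t ==>
               forAll (choose (2, n - 1)) (\x -> x == modExp (modExp x e n) d n)

    The only condition left for ==> to filter is rPrime e t, which discards far fewer cases than testing primality of arbitrary generated integers did.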

    Read the article

  • I'm looking for an online ASP.NET tutor.

    - by pkiyan
    $15/hr. I know it's not much but...

    Hi. I'm looking for an ASP.NET tutor. I want to use a remote desktop application so we can see each other's screens, and use Skype or phone to communicate. You won't need to come up with any lessons or anything like that. I was thinking we could spend an hour or two each time we logged in to build a decent-sized website from scratch. That's basically it.

    I'm a beginner with about 2 months experience with ASP.NET, so we won't have to start from the very beginning, but pretty close. I want this site to have a little complexity to it and not just be a website for beginners, but something I could study for a while. I'll pay you through PayPal or some other method if you prefer.

    By the way, it doesn't have to be a website that we work on together. I'll listen to other suggestions too. Maybe we could use an open source site/app to walk through, study, and modify. I've looked at 'My Web Pages Starter Kit 1.30', 'SubText 2.1.2', 'nopCommerce 1.5', and some others. They were all beyond me, and I couldn't make sense of any of the source code. But if you use and are really familiar with an open source app/site that I can download, we could study that.

    Here are some technical specs about the site I'd like to build/study:

    - ASP.NET 2.0+ (preferably 3.5+, but I don't really care)
    - C# / VB.NET (don't really care, I suck at both. This is more about ASP.NET and helping me understand the structure of an ASP.NET website and the .NET framework in general.)
    - SQL Server (I have SQL Server 2008 Express and would someday like to learn how to use this thing.)
    - JavaScript / AJAX (at least some use of this)
    - XML (basically, I'd like to spend some time in the web.config file and have some sense of what's going on in there.)
    - ASP.NET folders (I'd like to work with all of the ASP.NET folders if possible: App_Code, App_GlobalResources, etc., and understand what does/doesn't go in them. Hopefully we can build more than one theme too.)
    - Assemblies (how do you create a .dll and use it across different websites? Maybe you could suggest a third-party .dll that we could use.)
    - Web Services (I read about these once but didn't really get it.)

    I can't think of anything else, but the above will definitely keep me busy. Hopefully we could make use of a lot of the server controls too (the nav controls gave me a headache when I tried customizing them).

    Is someone willing to help? I'll pay through PayPal, 15 bucks an hour. I live in the Dallas, Texas (US) area, so we'd have to synchronize time zones and agree on a day(s)/time of the week. I prefer working at night and on the weekends because I work during the week, but whatever your schedule allows too. If you'd like to help me, can you post: years of experience with ASP.NET, your time zone and the times you're available, and any ideas you might have about how you'd like to tutor? THANK YOU.

    Read the article

  • Looking for a function that will split profits/loss equally between 2 business partners.

    - by Hamish Grubijan
    This is not homework, for I am not a student. This is for my general curiosity. I apologize if I am reinventing the wheel here. The function I seek can be defined as follows (language agnostic):

        int getPercentageOfA(double moneyA, double workA, double moneyB, double workB)
        {
            // Perhaps you may assume that workA == 0
            // Compute result
            return result;
        }

    Suppose Alice and Bob want to do business together ... such as ... selling used books. Alice is only interested in investing money in it and nothing else. Bob might invest some money, but he might have no $ available to invest. He will, however, put in the effort of finding a seller, finding a buyer, and doing maintenance. There are no tools, education, health insurance costs, or other expenses to consider. Both Alice and Bob wish to split the profits "equally" (a different weight, like 40/60, for advanced users). Both are entrepreneurs, so they deal with low ROI/wage and high income alike. There is no fixed wage, minimum wage, fixed ROI, or minimum ROI. They try to find the best deal possible, assume risks, and go for it.

    Now, let's stick with the 50/50 model. If Alice invests $100, Bob invests work, and they end up with a profit (or loss) of $60, they will split it equally - either both get $30 for their efforts/investments, or Bob ends up owing $30 to Alice.

    A second possibility: both Alice and Bob invest $100, then Bob does all the work, and they end up splitting $60 profit. It looks like Alice should get only $15, because $30 of that profit came from Bob's investment and Bob's effort, so Alice shall have none of it, and the other $30 is to be split 50/50.

    Both of the examples above are trivial, even when A and B want to split it 35/65 or what have you. Now it gets more complicated: what if Alice invests $70, and Bob invests $30 + does all of the work? It appears simple: (70,30) = (30,30) + (40,0) ... but, if only we knew how to weigh the two parts relative to each other. Another complicated (I think) example: what if Alice and Bob invest $70 and $30 respectively, and also put in an equal amount of work?

    I have a few data points:

    - When A and B put in the same amount of work and the same $ - 50/50.
    - When A puts in 100% of the money, and B does 100% of the work - 50/50.
    - When A does all of the work and puts in all of the money - 100 for A / 0 for B (and vice versa).
    - When A puts in 50% of the money, and B puts in 50% of the money as well as does all of the work - 25 for A, and 75 for B (and vice versa).

    If I fix things such that always workA = 0% and workB = 100% of the total work, then getPercentageOfA becomes a function: height z given x and y. The question is - how do you extrapolate this function between these several points? What is this function?

    If you can cover the cases where workA does not have to be 0% of the total work, and where investment vs work is split as 85/15 or using some other model, then what would the new function be?
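
    For what it's worth, one function consistent with all four data points above weights the money pool and the work pool equally: A's share is 0.5 * (moneyA / totalMoney) + 0.5 * (workA / totalWork), with the two 0.5 weights replaced by, say, 0.35/0.65 for other models. A sketch (Java, since the signature is C-family); this is one interpolation that fits the listed points, not a claim of uniqueness:

        public class ProfitSplit {

            // Share of the profit/loss for A, as a whole percentage, under the
            // 50/50 model: half the pie rewards money, half rewards work.
            static int getPercentageOfA(double moneyA, double workA,
                                        double moneyB, double workB) {
                double moneyShare = (moneyA + moneyB) > 0 ? moneyA / (moneyA + moneyB) : 0.5;
                double workShare  = (workA + workB)  > 0 ? workA / (workA + workB)   : 0.5;
                return (int) Math.round(100 * (0.5 * moneyShare + 0.5 * workShare));
            }

            public static void main(String[] args) {
                System.out.println(getPercentageOfA(100, 0, 0, 1)); // all money vs all work: 50
                System.out.println(getPercentageOfA(50, 0, 50, 1)); // half the money, none of the work: 25
            }
        }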

    Read the article

  • MySQL: Creating table with FK error (errno 150)

    - by Peter Bailey
    I've tried searching on this error and nothing I've found helps me, so I apologize in advance if this is a duplicate and I'm just too dumb to find it.

    I've created a model with MySQL Workbench and am now attempting to install it to a MySQL server. Using File > Export > Forward Engineer SQL CREATE Script..., it outputs a nice big file for me, with all the settings I ask for. I switch over to MySQL GUI Tools (the Query Browser, specifically) and load up this script (note that I'm going from one official MySQL tool to another). However, when I try to actually execute this file, I get the same error over and over:

        SQLSTATE[HY000]: General error: 1005 Can't create table './srs_dev/location.frm' (errno: 150)

    "OK," I say to myself, "something is wrong with the location table." So I check out the definition in the output file:

        SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0;
        SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0;
        SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='TRADITIONAL';

        -- -----------------------------------------------------
        -- Table `state`
        -- -----------------------------------------------------
        DROP TABLE IF EXISTS `state` ;

        CREATE TABLE IF NOT EXISTS `state` (
          `state_id` INT NOT NULL AUTO_INCREMENT ,
          `iso_3166_2_code` VARCHAR(2) NOT NULL ,
          `name` VARCHAR(60) NOT NULL ,
          PRIMARY KEY (`state_id`) )
        ENGINE = InnoDB;

        -- -----------------------------------------------------
        -- Table `brand`
        -- -----------------------------------------------------
        DROP TABLE IF EXISTS `brand` ;

        CREATE TABLE IF NOT EXISTS `brand` (
          `brand_id` INT UNSIGNED NOT NULL AUTO_INCREMENT ,
          `name` VARCHAR(45) NOT NULL ,
          `domain` VARCHAR(45) NOT NULL ,
          `manager_name` VARCHAR(100) NULL ,
          `manager_email` VARCHAR(255) NULL ,
          PRIMARY KEY (`brand_id`) )
        ENGINE = InnoDB;

        -- -----------------------------------------------------
        -- Table `location`
        -- -----------------------------------------------------
        DROP TABLE IF EXISTS `location` ;

        CREATE TABLE IF NOT EXISTS `location` (
          `location_id` INT UNSIGNED NOT NULL AUTO_INCREMENT ,
          `name` VARCHAR(255) NOT NULL ,
          `address_line_1` VARCHAR(255) NULL ,
          `address_line_2` VARCHAR(255) NULL ,
          `city` VARCHAR(100) NULL ,
          `state_id` TINYINT UNSIGNED NULL DEFAULT NULL ,
          `postal_code` VARCHAR(10) NULL ,
          `phone_number` VARCHAR(20) NULL ,
          `fax_number` VARCHAR(20) NULL ,
          `lat` DECIMAL(9,6) NOT NULL ,
          `lng` DECIMAL(9,6) NOT NULL ,
          `contact_url` VARCHAR(255) NULL ,
          `brand_id` TINYINT UNSIGNED NOT NULL ,
          `summer_hours` VARCHAR(255) NULL ,
          `winter_hours` VARCHAR(255) NULL ,
          `after_hours_emergency` VARCHAR(255) NULL ,
          `image_file_name` VARCHAR(100) NULL ,
          `manager_name` VARCHAR(100) NULL ,
          `manager_email` VARCHAR(255) NULL ,
          `created_date` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ,
          PRIMARY KEY (`location_id`) ,
          CONSTRAINT `fk_location_state`
            FOREIGN KEY (`state_id` )
            REFERENCES `state` (`state_id` )
            ON DELETE NO ACTION
            ON UPDATE NO ACTION,
          CONSTRAINT `fk_location_brand`
            FOREIGN KEY (`brand_id` )
            REFERENCES `brand` (`brand_id` )
            ON DELETE NO ACTION
            ON UPDATE NO ACTION)
        ENGINE = InnoDB;

        CREATE INDEX `fk_location_state` ON `location` (`state_id` ASC) ;
        CREATE INDEX `fk_location_brand` ON `location` (`brand_id` ASC) ;
        CREATE INDEX `idx_lat` ON `location` (`lat` ASC) ;
        CREATE INDEX `idx_lng` ON `location` (`lng` ASC) ;

    Looks OK to me. I surmise that maybe something is wrong with the Query Browser, so I put this file on the server and try to load it this way:

        mysql -u admin -p -D dbname < path/to/create_file.sql

    And I get the same error. So I start to Google this issue and find all kinds of accounts of InnoDB tables failing with foreign keys, where the fix is to add "SET FOREIGN_KEY_CHECKS=0;" to the SQL script. Well, as you can see, that's already part of the file that MySQL Workbench spat out. So, my question is then: why is this not working when I'm doing what I think I'm supposed to be doing?

    Version info: MySQL 5.0.45; GUI Tools 1.2.17; Workbench 5.0.30
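
    One thing worth checking before blaming the tools: errno 150 is InnoDB's complaint about the foreign key definition itself, and its usual cause is a type mismatch between the FK column and the referenced column; disabling FOREIGN_KEY_CHECKS does not relax that requirement, which would explain why the script still fails. Here, state.state_id is INT while location.state_id is TINYINT UNSIGNED, and brand.brand_id is INT UNSIGNED while location.brand_id is TINYINT UNSIGNED. A likely fix, sketched with only the two offending columns changed (the rest of the CREATE stays as generated):

        CREATE TABLE IF NOT EXISTS `location` (
          ...
          `state_id` INT NULL DEFAULT NULL ,      -- was TINYINT UNSIGNED; must match state.state_id exactly
          ...
          `brand_id` INT UNSIGNED NOT NULL ,      -- was TINYINT UNSIGNED; must match brand.brand_id exactly
          ...
        ) ENGINE = InnoDB;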

    Read the article

  • What are good CLI tools for JSON?

    - by jasonmp85
    General Problem

    Though I may be diagnosing the root cause of an event, determining how many users it affected, or distilling timing logs in order to assess the performance and throughput impact of a recent code change, my tools stay the same: grep, awk, sed, tr, uniq, sort, zcat, tail, head, join, and split. To glue them all together, Unix gives us pipes, and for fancier filtering we have xargs. If these fail me, there's always perl -e.

    These tools are perfect for processing CSV files, tab-delimited files, log files with a predictable line format, or files with comma-separated key-value pairs. In other words, files where each line has next to no context.

    XML Analogues

    I recently needed to trawl through gigabytes of XML to build a histogram of usage by user. This was easy enough with the tools I had, but for more complicated queries the normal approaches break down. Say I have files with items like this:

        <foo user="me">
            <baz key="zoidberg" value="squid" />
            <baz key="leela" value="cyclops" />
            <baz key="fry" value="rube" />
        </foo>

    And let's say I want to produce a mapping from user to average number of <baz>s per <foo>. Processing line-by-line is no longer an option: I need to know which user's <foo> I'm currently inspecting so I know whose average to update. Any sort of Unix one-liner that accomplishes this task is likely to be inscrutable.

    Fortunately in XML-land, we have wonderful technologies like XPath, XQuery, and XSLT to help us. Previously, I had gotten accustomed to using the wonderful XML::XPath Perl module to accomplish queries like the one above, but after finding a TextMate plugin that could run an XPath expression against my current window, I stopped writing one-off Perl scripts to query XML. And I just found out about XMLStarlet, which is installing as I type this and which I look forward to using in the future.

    JSON Solutions?

    So this leads me to my question: are there any tools like this for JSON? It's only a matter of time before some investigation task requires me to do similar queries on JSON files, and without tools like XPath and XSLT, such a task will be a lot harder. If I had a bunch of JSON that looked like this:

        {
            "firstName": "Bender",
            "lastName": "Robot",
            "age": 200,
            "address": {
                "streetAddress": "123",
                "city": "New York",
                "state": "NY",
                "postalCode": "1729"
            },
            "phoneNumber": [
                { "type": "home", "number": "666 555-1234" },
                { "type": "fax", "number": "666 555-4567" }
            ]
        }

    and wanted to find the average number of phone numbers each person had, I could do something like this with XPath:

        fn:avg(/fn:count(phoneNumber))

    Questions

    - Are there any command-line tools that can "query" JSON files in this way?
    - If you have to process a bunch of JSON files on a Unix command line, what tools do you use?
    - Heck, is there even work being done to make a query language like this for JSON?
    - If you do use tools like this in your day-to-day work, what do you like/dislike about them? Are there any gotchas?

    I'm noticing more and more data serialization is being done using JSON, so processing tools like this will be crucial when analyzing large data dumps in the future. Language libraries for JSON are very strong, and it's easy enough to write scripts to do this sort of processing, but to really let people play around with the data, shell tools are needed.

    Related Questions

    - Grep and Sed Equivalent for XML Command Line Processing
    - Is there a query language for JSON?
    - JSONPath or other XPath-like utility for JSON/Javascript; or JQuery JSON
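
    In the meantime, the fallback the question already concedes ("easy enough to write scripts") looks like this in Python - a sketch assuming a JSON array of person objects shaped like the example above arrives on stdin:

        import json
        import sys

        # Average number of phone numbers per person; wrap a single object
        # in [ ... ] first if the input is not already an array.
        people = json.load(sys.stdin)
        counts = [len(p.get("phoneNumber", [])) for p in people]
        print(sum(counts) / float(len(counts)))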

    Read the article

< Previous Page | 324 325 326 327 328 329 330 331 332 333 334 335  | Next Page >