Search Results

Search found 97822 results on 3913 pages for 'static code analysis'.


  • Linq2sql code generator misbehaving

    - by Martin
    Sometimes linq2sql just makes its mind up about things. I've been pulling my hair out for the past few hours trying to work out what I'm doing differently from all the other times, when I don't get a ForeignKeyReferenceAlreadyHasValueException. It turns out that

        if (this._Activity.HasLoadedOrAssignedValue)
        {
            throw new System.Data.Linq.ForeignKeyReferenceAlreadyHasValueException();
        }

    is present on my primary key in this particular table, and in no other. No matter what I do with the association (I've even tried deleting it and dragging the thing back onto the designer), it's still there, and I'm sure it's not supposed to be. I know why, of course, but then again I don't know why, so to speak. A while back the association went the other way. Whereas I've left that era behind me, the code generator seems to exhibit phantom pains. The same phenomenon is responsible for me having to change the namespace in the designer.cs every time I make changes in the designer. I made the mistake of renaming my namespace, and the code generator just doesn't get it. Somebody please help this poor boy out.
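
    For what it's worth, the usual trigger for this exception is assigning to the scalar foreign-key column after the EntityRef side of the association has already loaded. A minimal sketch, where MyDataContext, Record and Activity are illustrative stand-ins for the generated types:

        // Sketch only: type and property names are hypothetical.
        using (var db = new MyDataContext())
        {
            Record record = db.Records.First();
            // Throws ForeignKeyReferenceAlreadyHasValueException once
            // _Activity.HasLoadedOrAssignedValue is true:
            //     record.ActivityId = 42;
            // Assigning through the association itself is safe:
            record.Activity = db.Activities.Single(a => a.Id == 42);
            db.SubmitChanges();
        }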


  • How to debug/break in codedom compiled code

    - by Jason Coyne
    I have an application which loads C# source files dynamically and runs them as plugins. When I am running the main application in debug mode, is it possible to debug into the dynamic assembly? Obviously setting breakpoints is problematic, since the source is not part of the original project, but should I be able to step into the code, or break on exceptions it throws? Is there a way to get CodeDom to generate PDBs for this, or something? Here is the code I am using for dynamic compilation:

        CSharpCodeProvider codeProvider = new CSharpCodeProvider(
            new Dictionary<string, string>() { { "CompilerVersion", "v3.5" } });
        ICodeCompiler icc = codeProvider.CreateCompiler();
        CompilerParameters parameters = new CompilerParameters();
        parameters.GenerateExecutable = false;
        parameters.GenerateInMemory = true;
        parameters.CompilerOptions = string.Format("/lib:\"{0}\"", Application.StartupPath);
        parameters.ReferencedAssemblies.Add("System.dll");
        parameters.ReferencedAssemblies.Add("System.Core.dll");
        CompilerResults results = icc.CompileAssemblyFromSource(parameters, Source);
        DLL.CreateInstance(t.FullName, false, BindingFlags.Default, null,
            new object[] { engine }, null, null);
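
    One approach that should work, as a sketch (paths are illustrative, and it reuses codeProvider from the snippet above): compile to disk with debug information, and compile from a file rather than an in-memory string, so the PDB can point the debugger at real source lines.

        // Sketch: write the plugin source to a real file, then compile with a PDB.
        string sourcePath = Path.Combine(Path.GetTempPath(), "plugin.cs");
        File.WriteAllText(sourcePath, Source);

        CompilerParameters p = new CompilerParameters();
        p.GenerateExecutable = false;
        p.GenerateInMemory = false;         // an on-disk assembly gets an on-disk PDB
        p.IncludeDebugInformation = true;   // emit the PDB
        p.OutputAssembly = Path.Combine(Path.GetTempPath(), "plugin.dll");
        p.ReferencedAssemblies.Add("System.dll");
        p.ReferencedAssemblies.Add("System.Core.dll");

        CompilerResults r = codeProvider.CompileAssemblyFromFile(p, sourcePath);

    With the PDB next to the loaded assembly, Visual Studio can break on exceptions inside the plugin and open the temp source file, and breakpoints can then be set in that file as well.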


  • Code Golf: Rotating Maze

    - by trinithis
    Make a program that takes in a file consisting of a maze. The maze has walls given by '#'. The maze must include a single ball, given by an 'o', and any number of holes, given by '@'. The maze file can either be given on the command line or read as a line through standard input; please specify which in your solution. Your program then does the following:

    1: If the ball is not directly above a wall, drop it down to the nearest wall.
    2: If the ball passes through a hole during step 1, remove the ball.
    3: Display the maze.
    4: If there is no ball in the maze, exit.
    5: Read a line from the standard input. Given a 1, rotate the maze counterclockwise. Given a 2, rotate the maze clockwise. Rotations are done by 90 degrees. It is up to you to decide if extraneous whitespace is allowed. If the user enters other input, repeat this step.
    6: Go to step 1.

    You may assume all input mazes are closed (note that a hole effectively acts as a wall in this regard) and that they have no extraneous whitespace. The shortest source code by character count wins. Example mazes:

        ######
        #o  @#
        ######

        ########### #o # # ####### # ###@ # #########

        ########################### # # # # @ # # # # ## # # ####o#### # # # # # # ######### # @ ######################
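
    For reference, here is an ungolfed C# sketch of the mechanical core, the clockwise rotation of the character grid (names are illustrative; counterclockwise is the same with the index flipped the other way):

        // Rotate a rectangular char grid 90 degrees clockwise.
        static char[][] RotateClockwise(char[][] grid)
        {
            int rows = grid.Length, cols = grid[0].Length;
            var rotated = new char[cols][];
            for (int c = 0; c < cols; c++)
            {
                rotated[c] = new char[rows];
                for (int r = 0; r < rows; r++)
                    rotated[c][r] = grid[rows - 1 - r][c]; // bottom row becomes first column
            }
            return rotated;
        }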


  • Memory leak in Apple's 'Scrolling' sample code

    - by John
    Hi all, I'm using code based on Apple's "Scrolling" sample code. Here's where I have a problem:

        // load all the images from our bundle and add them to the scroll view
        NSUInteger i;
        for (i = 1; i <= jNumImages; i++)
        {
            NSString *imageName = [NSString stringWithFormat:@"page%d.png", i];
            UIImage *image = [UIImage imageNamed:imageName];
            UIImageView *imageView2 = [[UIImageView alloc] initWithImage:image];

    The UIImageView causes a leak; it doesn't seem to be releasing (though I do call [imageView2 release]; after adding the imageView as a subview to scrollView2). I have, say, 15 chapters of a book, each held in a nav bar, each containing a scroll view with one chapter's worth of these image views (each image is a page). When I get to around the second-to-last chapter the app crashes due to the memory leaks... really annoying! I think it might be because imageView has been alloc'd twice (in alloc and in addSubview), but I'm not sure... I tried releasing twice but it didn't seem to help. Any pointers? Thanks in advance ^.^


  • Remove redundant SQL code

    - by Dave Jarvis
    Code

    The following code calculates the slope and intercept for a linear regression against a slathering of data. It then applies the equation y = mx + b against the same result set to calculate the value of the regression line for each row. Can the two separate sub-selects be joined so that the data and its slope/intercept are calculated without executing the data gathering part of the query twice?

        SELECT
          AVG(D.AMOUNT) as AMOUNT,
          Y.YEAR * ymxb.SLOPE + ymxb.INTERCEPT as REGRESSION_LINE,
          Y.YEAR as YEAR,
          MAKEDATE(Y.YEAR,1) as AMOUNT_DATE
        FROM
          CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D,
          (SELECT
             ((avg(t.AMOUNT * t.YEAR)) - avg(t.AMOUNT) * avg(t.YEAR)) /
               (stddev( t.AMOUNT ) * stddev( t.YEAR )) as CORRELATION,
             ((sum(t.YEAR) * sum(t.AMOUNT)) - (count(1) * sum(t.YEAR * t.AMOUNT))) /
               (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as SLOPE,
             ((sum( t.YEAR ) * sum( t.YEAR * t.AMOUNT )) - (sum( t.AMOUNT ) * sum(power(t.YEAR, 2)))) /
               (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as INTERCEPT
           FROM (
             SELECT
               AVG(D.AMOUNT) as AMOUNT,
               Y.YEAR as YEAR,
               MAKEDATE(Y.YEAR,1) as AMOUNT_DATE
             FROM
               CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D
             WHERE
               $X{ IN, C.ID, CityCode } AND
               SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < $P{Radius} AND
               S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND
               Y.YEAR BETWEEN 1900 AND 2009 AND
               M.YEAR_REF_ID = Y.ID AND
               M.CATEGORY_ID = $P{CategoryCode} AND
               M.ID = D.MONTH_REF_ID AND
               D.DAILY_FLAG_ID <> 'M'
             GROUP BY Y.YEAR
           ) t
          ) ymxb
        WHERE
          $X{ IN, C.ID, CityCode } AND
          SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < $P{Radius} AND
          S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND
          Y.YEAR BETWEEN 1900 AND 2009 AND
          M.YEAR_REF_ID = Y.ID AND
          M.CATEGORY_ID = $P{CategoryCode} AND
          M.ID = D.MONTH_REF_ID AND
          D.DAILY_FLAG_ID <> 'M'
        GROUP BY Y.YEAR

    Question

    How do I execute the duplicate bits only once per query, instead of twice? The duplicate bit is the WHERE clause:

        $X{ IN, C.ID, CityCode } AND
        SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < $P{Radius} AND
        S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND
        Y.YEAR BETWEEN 1900 AND 2009 AND
        M.YEAR_REF_ID = Y.ID AND
        M.CATEGORY_ID = $P{CategoryCode} AND
        M.ID = D.MONTH_REF_ID AND
        D.DAILY_FLAG_ID <> 'M'

    Related

    http://stackoverflow.com/questions/1595659/how-to-eliminate-duplicate-calculation-in-sql

    Thank you!


  • Code-Golf: one-line PHP syntax

    - by Kendall Hopkins
    Explanation

    PHP has some holes in its syntax, and occasionally in development a programmer will step in them. This can lead to much frustration, as these syntax holes seem to exist for no reason. For example, one can't easily create an array and access an arbitrary element of that array on the same line (func1()[100] is not valid PHP syntax). The workaround for this issue is to use a temporary variable and break the statement into two lines, but sometimes that can lead to very verbose, clunky code.

    Challenge

    I know of a few of these holes (I'm sure there are more). It is quite hard to even come up with a solution, let alone in code-golf style. The winner is the person with the fewest characters total for all four syntax holes.

    Rules

    The statement must be one line in this form: $output = ...;, where ... doesn't contain any ;'s.
    Only use standard library functions (no custom functions allowed).
    The statement works identically to the assumed functionality of the non-working syntax (even in the cases where it fails).
    The statement must run without a syntax error of any kind with E_STRICT | E_ALL.

    Syntax Holes

    $output = func_return_array()[$key]; - accessing an arbitrary offset (string or integer) of the returned array of a function
    $output = new {$class_base.$class_suffix}(); - arbitrary string concatenation being used to create a new class
    $output = {$func_base.$func_suffix}(); - arbitrary string concatenation being called as a function
    $output = func_return_closure()(); - calling a closure returned from another function


  • Code Golf: Find the possible ways on a numpad

    - by ikar
    I was bored at school today, so I tried to amuse myself using my calculator and a "game" I've invented, which isn't really a game but keeps the boredom away. Also, some time has passed since the last real code golf here, so I decided to create this one. Imagine a simplified numpad like you know from your phone (I'll leave the 0 out for this code golf, as it kind of destroys all the fun):

        1 2 3
        4 5 6
        7 8 9

    Now the rules of the game are:

    At the end, every digit must have been visited exactly once.
    You can start at any digit you want.
    You can always move one digit up, down, left or right. You can't move diagonally!

    There are quite a lot of possible ways (or not; I haven't found out yet). Here is a trivial example:

        > > v
        v < <
        > > |

    The output of the golf program should look something like the above. I'll try to explain the symbols:

        >   Go right
        <   Go left
        ^   Go up
        v   Go down
        |   End of the way

    The program's output can either be the numbers pressed in the right order from beginning to end, or an (ASCII) picture like the one above. Example solutions:

        147852369
        569874123
        523698741

    So if we speak out the example above, it would be: start at 1, move right to 2, move right to 3, go down to 6, go left to 5, go left to 4, go down to 7, go right to 8, then go right to 9, and we are finished! Many different ways are possible: you could as well start at 5 and go around it in a circle. So the task would be: write a program that can compute (using brute force or whatever) the possible solutions for the numpad problem described above. (Friendly rhetorical question with smiley removed because it made some people think that this is homework.)
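
    For clarity (not a competing entry), here is an ungolfed C# reference sketch that enumerates the valid paths by brute force and prints them in the digits form:

        using System;
        using System.Collections.Generic;

        class NumpadPaths
        {
            static readonly bool[] used = new bool[10];
            static readonly List<int> path = new List<int>();

            // Two keys are adjacent when they sit one step apart on the 3x3 pad.
            static bool Adjacent(int a, int b)
            {
                int ra = (a - 1) / 3, ca = (a - 1) % 3;
                int rb = (b - 1) / 3, cb = (b - 1) % 3;
                return Math.Abs(ra - rb) + Math.Abs(ca - cb) == 1;
            }

            static void Extend(int last)
            {
                if (path.Count == 9)
                {
                    Console.WriteLine(string.Concat(path)); // e.g. 123654789
                    return;
                }
                for (int next = 1; next <= 9; next++)
                {
                    if (used[next] || !Adjacent(last, next)) continue;
                    used[next] = true; path.Add(next);
                    Extend(next);
                    used[next] = false; path.RemoveAt(path.Count - 1);
                }
            }

            static void Main()
            {
                for (int start = 1; start <= 9; start++)
                {
                    used[start] = true; path.Add(start);
                    Extend(start);
                    used[start] = false; path.Clear();
                }
            }
        }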


  • An online PHP debugger/code editor

    - by Zirak
    It's a simple deal: I'm sometimes in places where I don't have my laptop, and I find myself with spare time and an idea for a project. But unfortunately, I can't do anything about it. I've tried a variety of solutions, which include running IDEs (like PhpStorm or Aptana) from a disk-on-key or CD (very slow and unappealing) and trying several online solutions (like http://phpanywhere.net), and found that all of them are either buggy, overloaded or underloaded with features, just difficult to use, require FTP, etc. All that is required here is syntax highlighting and debugging alerts; no actual running of code. So the question is split in two:

    1) Do you know of a good online PHP editor that you've used and enjoyed?
    2) If not, how would you go about making one?

    The second one seems a bit general, so I'll try to expand... It might be a good idea: if you can't find one, make one. The question is about the concept of making a syntax highlighter (which shouldn't be too difficult), and the difficult part of catching PHP errors WITHOUT executing any PHP code. Thank you in advance.


  • How to run OpenGL code without compiling?

    - by Ole Jak
    So I have some OpenGL code (such code, for example):

        /*
          FUNCTION:     YCamera :: CalculateWorldCoordinates
          ARGUMENTS:    x    mouse x coordinate
                        y    mouse y coordinate
                        vec  where to store coordinates
          RETURN:       n/a
          DESCRIPTION:  Convert mouse coordinates into world coordinates
        */
        void YCamera :: CalculateWorldCoordinates(float x, float y, YVector3 *vec)
        {   // START
            GLint viewport[4];
            GLdouble mvmatrix[16], projmatrix[16];
            GLint real_y;
            GLdouble mx, my, mz;

            glGetIntegerv(GL_VIEWPORT, viewport);
            glGetDoublev(GL_MODELVIEW_MATRIX, mvmatrix);
            glGetDoublev(GL_PROJECTION_MATRIX, projmatrix);
            real_y = viewport[3] - (GLint) y - 1; // viewport[3] is height of window in pixels
            gluUnProject((GLdouble) x, (GLdouble) real_y, 1.0,
                         mvmatrix, projmatrix, viewport, &mx, &my, &mz);

            /* 'mouse' is the point where mouse projection reaches FAR_PLANE.
               World coordinates is intersection of line(camera->mouse) with plane(z=0)
               (see LaMothe 306)
               Equation of line in 3D:           (x-x0)/a = (y-y0)/b = (z-z0)/c
               Intersection of line with plane:  z = 0
               x-x0 = a(z-z0)/c  <=>  x = x0 + a(0-z0)/c  <=>  x = x0 - a*z0/c
               y = y0 - b*z0/c */
            double lx = fPosition.x - mx;
            double ly = fPosition.y - my;
            double lz = fPosition.z - mz;
            double sum = lx*lx + ly*ly + lz*lz;
            double normal = sqrt(sum);
            double z0_c = fPosition.z / (lz/normal);

            vec->x = (float) (fPosition.x - (lx/normal)*z0_c);
            vec->y = (float) (fPosition.y - (ly/normal)*z0_c);
            vec->z = 0.0f;
        }

    I want to run it without precompiling. Is there any way to do such a thing?


  • XSD, restrictions and code generation

    - by bob
    Hello, I'm working on some code generation for an existing project, and I want to start from an XSD. That way I can use tools such as Xsd2Code / xsd.exe to generate the code and also use the XSD to validate the XML. That part works without any problems. I also want to translate some of the restrictions to DataAnnotations (enriching Xsd2Code). For example, xs:minInclusive / xs:maxInclusive I can translate to a RangeAttribute. But what to do with custom validation attributes that we created? Can I add custom facets / restrictions? And how? Or is there another solution / best practice? I would like to collect everything in a single (XSD) file, so that one file contains the structure of the class (model) including the validation (attributes) that has to be added.

        <xs:element name="CertainValue">
          <xs:simpleType>
            <xs:restriction base="xs:double">
              <xs:minInclusive value="1" />
              <xs:maxInclusive value="100" />
              <xs_custom:customRule attribute="value" />
            </xs:restriction>
          </xs:simpleType>
        </xs:element>
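
    For the standard facets the mapping is mechanical; as a sketch using the System.Xml.Schema object model (the attribute-rendering helper is this question's invention, not part of Xsd2Code):

        using System.Xml.Schema;

        // Sketch: turn minInclusive/maxInclusive facets into a [Range(...)] string.
        static string FacetsToRangeAttribute(XmlSchemaSimpleTypeRestriction restriction)
        {
            string min = null, max = null;
            foreach (XmlSchemaFacet facet in restriction.Facets)
            {
                if (facet is XmlSchemaMinInclusiveFacet) min = facet.Value;
                if (facet is XmlSchemaMaxInclusiveFacet) max = facet.Value;
            }
            return (min != null && max != null)
                ? string.Format("[Range({0}, {1})]", min, max)
                : null;
        }

    Note that, as far as I know, a schema-valid xs:restriction only admits the facets the XSD specification defines, so a literal <xs_custom:customRule> child as written above will not validate; the usual extension point for custom rules is an xs:annotation/xs:appinfo block, which the generator can read and translate into the custom attributes.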


  • Android Signal analysis + some filters.

    - by Profete162
    Hello! As the World Cup is the main sport event and the vuvuzelas are the most annoying sound in the world, I had an idea to remove them definitively, after reading this news (http://www.popsci.com/diy/article/2010-06/simple-software-can-filter-out-vuvuzela-whine), which tells us that the sound has frequencies at 233 Hz plus 466, 932 and 1864 Hz. I have already made a lot of Android applications by myself, but I have never touched the signal analysis and filtering part, so here are a few questions. I do not ask for precise answers, but maybe for links and tutorials to find something to work on. I guess that a new Android phone has the CPU power to do real-time filtering.

    1) How can I intercept the sound coming from the jack microphone / line-in plug (I plan to link my TV to my phone with a jack-to-jack cable)? My question is purely about software and coding; I have all the wires and adapters to plug a jack into my Android phone's line-in.

    2) Are there some Fourier analysis libraries? May I take Java libraries from the web and import them into my Android project?

    I really apologize that my question is not precise, but I think this would be something great. Thank you for your answers.
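
    To give an idea of the filtering step itself, here is a sketch of one notch of the classic RBJ audio-EQ-cookbook biquad (written in C# for brevity; the arithmetic ports to Java line for line). Four of these, centred at 233, 466, 932 and 1864 Hz, would be cascaded over the incoming samples:

        using System;

        // One RBJ-cookbook biquad notch filter; run each sample through a
        // cascade of these (233/466/932/1864 Hz) to cut the vuvuzela partials.
        sealed class NotchFilter
        {
            readonly double b0, b1, b2, a1, a2;
            double x1, x2, y1, y2; // previous inputs/outputs

            public NotchFilter(double centreHz, double sampleRateHz, double q)
            {
                double w0 = 2 * Math.PI * centreHz / sampleRateHz;
                double alpha = Math.Sin(w0) / (2 * q);
                double a0 = 1 + alpha;
                b0 = 1 / a0;
                b1 = -2 * Math.Cos(w0) / a0;
                b2 = 1 / a0;
                a1 = -2 * Math.Cos(w0) / a0;
                a2 = (1 - alpha) / a0;
            }

            public double Process(double x)
            {
                double y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;
                x2 = x1; x1 = x;
                y2 = y1; y1 = y;
                return y;
            }
        }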


  • C++: Code assistance in Netbeans on Linux

    - by Martijn Courteaux
    Hi, my IDE (NetBeans) thinks this is wrong code, but it compiles correctly:

        std::cout << "i = " << i << std::endl;
        std::cout << add(5, 7) << std::endl;
        std::string test = "Boe";
        std::cout << test << std::endl;

    It always says: unable to resolve identifier ... (where ... = cout, endl, string). So I think it has something to do with the code assistance, and that I have to change/add/remove some folders. Currently, I have these include folders:

    C compiler:

        /usr/local/include
        /usr/lib/gcc/i486-linux-gnu/4.4.3/include
        /usr/lib/gcc/i486-linux-gnu/4.4.3/include-fixed
        /usr/include

    C++ compiler:

        /usr/include/c++/4.4.3
        /usr/include/c++/4.4.3/i486-linux-gnu
        /usr/include/c++/4.4.3/backward
        /usr/local/include
        /usr/lib/gcc/i486-linux-gnu/4.4.3/include
        /usr/include

    Thanks


  • Optimization in Common Declaration

    - by Pratik
    It's a 3-tier ASP.NET website project. In the data layer there is a class, CommonDeclaration, in which a lot of common things are defined, something like this:

        public class CommonDeclartion
        {
            #region Common Messages
            public const string RECORD_INSERT_MSG = "Record Inserted Successfully ";
            public const string RECORD_UPDATE_MSG = "Record Updated Successfully";
            public const string RECORD_DELETE_MSG = "Record Deleted Successfully";
            public const string ERROR_MSG = "Error Ocuured while Perfoming This Action.";
            public const string UserID_Incorrect = "Please Enter The Correct User ID.";
            public const string RECORD_ALREADY_EXIT = "Record Already Exit";
            public const string NO_RECORD = "No Record found.";
            #endregion
        }

    Can this be more optimized in terms of:

    1. Performance
    2. Security (if any)
    3. Code readability or reusability

    I thought of using an enum but can't figure that out:

        enum CommonMessages
        {
            RECORD_INSERT_MSG "Record Inserted Successfully.",
            RECORD_UPDATE_MSG "Record Updated Successfully.",
            RECORD_DELETE_MSG "Record Deleted Successfully.",
            ERROR_MSG "Error Ocuured while Perfoming This Action.",
            UserID_Incorrect "Please Enter The Correct User ID.",
            RECORD_ALREADY_EXIT "Record Already Exit.",
            NO_RECORD "No Record found.",
        }

    Or should I keep them in some collection like a Dictionary/NameValueCollection, or keep them in XML in the form of key/value pairs and retrieve them from it? What would be the better way, keeping in mind performance, security (if any), and code readability or reusability?
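
    C# enums can't carry strings directly, so one alternative sketch keeps an enum as the key and a dictionary as the single place the text lives (all names here are illustrative; the strings could later move to a .resx resource file for localization without touching call sites):

        using System.Collections.Generic;

        public enum CommonMessage
        {
            RecordInserted, RecordUpdated, RecordDeleted,
            Error, UserIdIncorrect, RecordAlreadyExists, NoRecord
        }

        public static class Messages
        {
            static readonly Dictionary<CommonMessage, string> text =
                new Dictionary<CommonMessage, string>
                {
                    { CommonMessage.RecordInserted,      "Record inserted successfully." },
                    { CommonMessage.RecordUpdated,       "Record updated successfully." },
                    { CommonMessage.RecordDeleted,       "Record deleted successfully." },
                    { CommonMessage.Error,               "An error occurred while performing this action." },
                    { CommonMessage.UserIdIncorrect,     "Please enter the correct user ID." },
                    { CommonMessage.RecordAlreadyExists, "Record already exists." },
                    { CommonMessage.NoRecord,            "No record found." }
                };

            public static string For(CommonMessage message)
            {
                return text[message];
            }
        }

    Performance-wise, const strings are resolved at compile time and cost nothing at runtime, so a lookup like this buys maintainability and compile-time-checked keys rather than speed.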


  • Statistical analysis on large data set to be published on the web

    - by dassouki
    I have a non-computer-related data logger that collects data from the field. This data is stored as text files, and I manually lump the files together and organize them. The current format is one CSV file per year per logger. Each file is around 4,000,000 lines x 7 loggers x 5 years = a lot of data. Some of the data is organized as bins (item_type, item_class, item_dimension_class), and other data is more unique, such as item_weight, item_color, date_collected, and so on...

    Currently, I do statistical analysis on the data using a Python/NumPy/Matplotlib program I wrote. It works fine, but the problem is that I'm the only one who can use it, since it and the data live on my computer. I'd like to publish the data on the web using a Postgres DB; however, I need to find or implement a statistical tool that'll take a large Postgres table and return statistical results within an adequate time frame. I'm not familiar with Python for the web; however, I'm proficient with PHP on the web side and Python on the offline side. Users should be allowed to create their own histograms and data analyses. For example, a user can search for all items that are blue and were shipped between week x and week y, while another user can sort the weight distribution of all items by hour for the whole year. I was thinking of creating and indexing my own statistical tools, or automating the process somehow to emulate most queries, but that seems inefficient. I'm looking forward to hearing your ideas. Thanks!


  • Oddities in Linq-to-SQL generated code related to property change/changing events

    - by Lasse V. Karlsen
    I'm working on creating my own LINQ to SQL generated classes in order to learn the concepts behind it all. I have some questions; if anyone knows the answer to one or more of them I'd be much obliged. The code below, and thus the questions, come from creating a .DBML file in the Visual Studio 2010 designer and inspecting the .Designer.cs file afterwards.

    1. Why is INotifyPropertyChanging not passing the property name?

    The event-raising method is defined like this:

        protected virtual void SendPropertyChanging()

    Why isn't the name of the property that is changing passed to the event here? It is defined to be part of the EventArgs descendant that is passed to the event handler, but the method only passes an empty such value to it.

    2. Why are the EntitySet<X> attach/detach methods not raising property changed?

    For an EntitySet<X> reference, the following two methods are generated:

        private void attach_EmailAddress1s(EmailAddress1 entity)
        {
            this.SendPropertyChanging();
            entity.Person1 = this;
        }

        private void detach_EmailAddress1s(EmailAddress1 entity)
        {
            this.SendPropertyChanging();
            entity.Person1 = null;
        }

    Why isn't SendPropertyChanged also called here? I'm sure I'll have more questions later, but for now these will suffice :)
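
    On the first question: the designer-generated classes cache a single shared PropertyChangingEventArgs(string.Empty) instance and pass it on every raise, presumably so that no EventArgs is allocated on each property write. A sketch of the name-passing variant the generated code could have used instead:

        using System.ComponentModel;

        // Sketch: the variant that reports which property is changing, at the
        // cost of allocating a PropertyChangingEventArgs per raised event.
        public class Entity : INotifyPropertyChanging
        {
            public event PropertyChangingEventHandler PropertyChanging;

            protected virtual void SendPropertyChanging(string propertyName)
            {
                PropertyChangingEventHandler handler = this.PropertyChanging;
                if (handler != null)
                {
                    handler(this, new PropertyChangingEventArgs(propertyName));
                }
            }
        }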



  • Code excavations, wishful invocations, perimeters and domain specific unit test frameworks

    - by RoyOsherove
    One of the talks I did at QCon London was about a subject that I came across fairly recently, when I was building SilverUnit, a "pure" unit test framework for Silverlight objects that depend on the Silverlight runtime to run. It is the concept of "cogs in the machine": when your piece of code needs to run inside a host framework or runtime that you have little or no control over for testability-related matters. Examples of such cogs and machines can be:

    your custom control running inside the Silverlight runtime in the browser
    your plug-in running inside an IDE
    your activity running inside a Windows Workflow
    your code running inside a Java EE bean
    your code inheriting from a COM+ (Enterprise Services) component
    etc.

    Not all of these are necessarily testability problems. The main testability problem usually comes when your code actually inherits from something inside the system. For example, one of the biggest problems with testing objects like Silverlight controls is the way they depend on the Silverlight runtime: they don't implement some Silverlight interface, and they don't just call external static methods against the framework runtime that surrounds them; they actually inherit parts of the framework. They all inherit (in this case) from the Silverlight DependencyObject.

    Wrapping it up?

    An inheritance dependency is uniquely challenging to bring under test, because "classic" methods such as wrapping the object under test with a framework wrapper will not work, and the only way to do it manually is to create parallel testable objects that get delegated all the possible actions from the dependencies. In Silverlight's case, that would mean creating your own custom logic class that would be called directly from controls that inherit from Silverlight, and would be tested independently of those controls. The pro side is that you get the benefit of understanding the "contract" and the "roles" your system plays against your logic, but unfortunately, more often than not, it can be very tedious to create, and may sometimes feel unnecessary or like code duplication.

    About perimeters

    A perimeter is that invisible line that you draw around your pieces of logic during a test, that separates the code under test from any dependencies that it uses. Most of the time, a test perimeter around an object will be the list of seams (dependencies that can be replaced, such as interfaces, virtual methods etc.) that are actually replaced for that test or for all the tests.

    Role-based perimeters

    In the case of creating a wrapper around an object, one really creates a "role-based" perimeter around the logic that is being tested: that wrapper takes on roles that are required by the code under test, and also communicates with the host system to implement those roles and provide any inputs to the logic under test. In the image below, we have the code we want to test represented as a star. No perimeter is drawn yet (we haven't wrapped it in anything yet). The next image shows what happens when you wrap your logic with a role-based wrapper: you get a role-based perimeter anywhere your code interacts with the system. There's another way to bring that code under test: using isolation frameworks like Typemock, Rhino Mocks and Moq (but if your code inherits from the system, Typemock might be the only way to isolate the code from the system interaction).

    Ad-hoc isolation perimeters

    The image below shows what I call an ad-hoc perimeter, which might be vastly different between different tests. This perimeter's surface is much smaller, because for that specific test, that is all the "change" that is required to the host system behavior. The third way of isolating the code from the host system is the main "meat" of this post:

    Subterranean perimeters

    Subterranean perimeters are deep-rooted perimeters: "always on" seams that can lie very deep in the heart of the host system, where they are fully invisible even to the test itself, not just to the code under test. Because they lie deep inside a system you can't control, the only way I've found to control them is with runtime (not compile-time) interception of method calls on the system. One way to get such abilities is by using aspect-oriented frameworks; for example, in SilverUnit I used the CThru AOP framework, based on Typemock hooks and CLR profilers, to intercept such system-level method calls and effectively turn them into seams that lie deep down at the heart of the Silverlight runtime. The image below depicts an example of what such a perimeter could look like. As you can see, the actual seams can be very far away from the actual code under test, and as you'll discover, that's actually a very good thing. Here is only a partial list of examples of such deep-rooted seams:

    disabling the constructor of a base class five levels below the code under test (this.base.base.base.base)
    faking static methods of a type that's being called several levels down the stack: method x() calls y() calls z() calls SomeType.StaticMethod()
    replacing an async mechanism with a synchronous one (replacing all timers with your own timer behavior that always ticks immediately upon calls to start() on the same caller thread, for example)
    replacing event mechanisms with your own event mechanism (to allow "firing" system events)
    changing the way the system saves information with your own saving behavior (in SilverUnit, I replaced all DependencyProperty set and get calls with calls to an in-memory value store instead of the one built into Silverlight, which threw exceptions without a browser)

    Several questions could jump in:

    How do you know what to fake? (How do you discover the perimeter?)
    How do you fake it?
    Wouldn't it be problematic to fake something you don't own? It might change in the future.

    How do you discover the perimeter to fake?

    To discover a perimeter, all you have to do is start with a wishful invocation. A wishful invocation is the act of trying to invoke a method (or even just create an instance) of an object using "regular" test code. You invoke the thing that you'd like to do in a real unit test, to see what happens:

    Can I even create an instance of this object without getting an exception?
    Can I invoke this method on that instance without getting an exception?
    Can I verify that some call into the system happened?

    You make the invocation, get an exception (because there is a dependency), and look at the stack trace. Choose a location in the stack trace and disable it. Then try the invocation again. If you don't get an exception, the perimeter is good for that invocation, so you can move on to trying out other methods on that object. In a future post I will show the process using CThru, and how you end up with something close to a domain-specific test framework after you're done creating the perimeter you need.
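
    As a tiny illustrative sketch of the "parallel testable object" / role-based wrapper idea (all names invented for illustration): the control that inherits from the framework stays a thin shell, and the logic talks only to a role interface it owns.

        // The "role" the logic needs from the host framework.
        public interface IValueStore
        {
            object Get(string key);
            void Set(string key, object value);
        }

        // Testable logic: no framework base class anywhere in sight.
        public class SliderLogic
        {
            private readonly IValueStore store;

            public SliderLogic(IValueStore store)
            {
                this.store = store;
            }

            public void Move(int delta)
            {
                object raw = store.Get("Position");
                int position = raw == null ? 0 : (int)raw;
                store.Set("Position", position + delta);
            }
        }

    In production, the real control (the class that inherits from DependencyObject) implements IValueStore over its dependency properties and forwards user input to SliderLogic; in a unit test, a Dictionary-backed fake plays the role and no Silverlight runtime is needed.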


  • Willy Rotstein on Analytics and Social Media in Retail

    - by sarah.taylor(at)oracle.com
    Recently I came across a presentation from Dan Zarrella on "The Science of Retweets" (http://www.slideshare.net/HubSpot/the-science-of-retweets-with-dan-zarrella). It is an insightful, fact-based analysis of how tweets propagate and what makes them successful. The analysis is of course very interesting for those of us interested in tweeting. However, what really caught my attention is how well it illustrates, from a very different angle, some of the issues I am discussing with retailers these days; in particular, the opportunities that e-commerce and social media open to those retailers with the appetite and vision to tackle the associated analytical challenges. And these challenges are of course not straightforward.

    In his presentation Dan introduces the concept of Observability. I haven't had the opportunity to discuss with Dan his specific definition for the term. However, in practical retail terms, I would say that it means that through social media (and other web channels such as search) we can analyze and track processes by measuring indicators that were not measurable before. The focus is on identifying patterns across a large number of consumers rather than what a particular individual "Likes".

    The potential impact for retailers is huge. It opens the opportunity to monitor changes in consumer preference and plan the business accordingly. And you can do this almost in "real time", rather than through infrequent surveys that provide a "rear view" picture of your consumers' behaviour. For instance, you could envision identifying when a particular set of fashion styles is breaking out from the pack, and commit a re-buy. Or you could monitor when the preference for a specific mobile device has declined, and hence markdowns should be considered; or how demand for a specific ready-made food typically flows across regions, and manage the inventory accordingly. Search, blogging, website and store data may all need to be considered in identifying these trends. The data volumes involved are huge (check Andrea Morgan's recent post on "Big Data" in retail), but so are the benefits. As Andrea says, for the first time we can start getting insight into "why" the business is performing in a certain way, rather than just reporting on what is happening. And it is not just about the data volumes: tackling the challenge also calls for integrated planning systems that can bring data and insight into the context of the decision-making process that buyers, merchandisers and supply chain managers follow. I strongly believe that only when data and process come together can you move from the anecdotal to systematically improving business performance.

    I would love to hear your opinions on these trends and where you think retail is heading to exploit these topics - please email me: [email protected]


  • #SSAS #Tabular Workshop and Community Events in Netherlands and Denmark

    - by Marco Russo (SQLBI)
    Next week I will finally start the roadshow of the SSAS Tabular Workshop, a two-day seminar about the new BISM Tabular model for Analysis Services that was introduced in SQL Server 2012. During these roadshows, we always try to arrange some speeches at local community events in the evening; we have already arranged one for Copenhagen, and we have a logistics issue in Amsterdam that we're trying to solve. Here is the timetable:

    Netherlands

    SSAS Workshop in Amsterdam, NL - April 16-17, 2012: a two-day seminar; Alberto and I will be the trainers for this event (register here). We're trying to arrange a community event as well, but we still don't have a confirmation, so stay tuned.

    Denmark

    SSAS Workshop in Copenhagen, DK - April 26-27, 2012: a two-day seminar; Alberto and I will be the trainers for this event (register here). There will be a community event on April 26, 2012, running in Hellerup at the Microsoft venue. All details are available here: http://msbip.dk/events/26/msbip-mode-nr-5/. People from Sweden are welcome! Just register with this private group on LinkedIn to announce your presence, so we'll know how many people will attend.

    At the community events we'll deliver two speeches; here are the descriptions:

    Inside xVelocity (VertiPaq)

    PowerPivot and BISM Tabular models in Analysis Services share a great columnar-based database engine called the xVelocity in-memory analytics engine (VertiPaq). If you want to improve performance and optimize memory use, you have to understand some basic principles about how this engine works, how data is compressed, and how you can design a data model for better optimization. Prepare yourself to change your mind: xVelocity optimization techniques might seem counterintuitive and are absolutely different from OLAP and SQL ones!

    Choosing between Tabular and Multidimensional

    You have a new project and you have to make an important decision upfront: should you use Tabular or Multidimensional? It is not easy to answer, because sometimes there is a clear choice, but most of the time both decisions might be correct, at least at the beginning. In this session we'll help you make an informed decision, correctly evaluating the pros and cons of each according to common scenarios, considering both the short-term and long-term consequences of your choice.

    I hope to meet many people on these first dates. We have many other events coming in May and June, including an online event (for US time zones), and you can also attend our pre-conference day at TechEd US in Orlando (PRC06) or TechEd Europe in Amsterdam. I'll be a good customer for airline companies in the next three months! I'm just sorry that I haven't had time to write other articles in the last month, but I'm accumulating material that I will need to write down during some flight - stay tuned...


  • Code Contracts: How they look after compiling?

    - by DigiMortal
    When you are using new tools that also do something at code level, it is a good idea to check out what additions are made to your code during compilation. Code contracts have a simple syntax when we are writing code in Visual Studio, but what happens after compilation? Do our methods stay the same as they look in code, or are they different after compiling? In this posting I will show you how code contracts look after compiling.

    In my previous examples about code contracts I used a randomizer class with a method called GetRandomFromRangeContracted:

        public int GetRandomFromRangeContracted(int min, int max)
        {
            Contract.Requires<ArgumentOutOfRangeException>(
                min < max,
                "Min must be less than max"
            );

            Contract.Ensures(
                Contract.Result<int>() >= min &&
                Contract.Result<int>() <= max,
                "Return value is out of range"
            );

            return _generator.Next(min, max);
        }

    Okay, it would be nice to dream about similar code when we open our assembly with Reflector and disassemble it. But... this time we have something interesting. While reading this code, don't feel uncomfortable about the names of the variables. This is disassembled code; .NET Framework internally allows these names. It is our compilers that don't accept them when we are building our code.

        public int GetRandomFromRangeContracted(int min, int max)
        {
            int Contract.Old(min);
            int Contract.Old(max);
            if (__ContractsRuntime.insideContractEvaluation <= 4)
            {
                try
                {
                    __ContractsRuntime.insideContractEvaluation++;
                    __ContractsRuntime.Requires<ArgumentOutOfRangeException>(
                        min < max,
                        "Min must be less than max", "min < max");
                }
                finally
                {
                    __ContractsRuntime.insideContractEvaluation--;
                }
            }
            try
            {
                Contract.Old(min) = min;
            }
            catch (Exception exception1)
            {
                if (exception1 == null)
                {
                    throw;
                }
            }
            try
            {
                Contract.Old(max) = max;
            }
            catch (Exception exception2)
            {
                if (exception2 == null)
                {
                    throw;
                }
            }
            int CS$1$0000 = this._generator.Next(min, max);
            int Contract.Result<int>() = CS$1$0000;
            if (__ContractsRuntime.insideContractEvaluation <= 4)
            {
                try
                {
                    __ContractsRuntime.insideContractEvaluation++;
                    __ContractsRuntime.Ensures(
                        (Contract.Result<int>() >= Contract.Old(min)) &&
                        (Contract.Result<int>() <= Contract.Old(max)),
                        "Return value is out of range",
                        "Contract.Result<int>() >= min && Contract.Result<int>() <= max");
                }
                finally
                {
                    __ContractsRuntime.insideContractEvaluation--;
                }
            }
            return Contract.Result<int>();
        }

    As we can see, contracts are not simply if-then-else checks and exception throwing. There is a counter that is incremented before the checks and decremented after them, whatever the result of the check was. One thing that annoys me is the null checks for exception1 and exception2: is there really some possible situation where null is thrown instead of some instance that is an Exception, or that inherits from it?

    Conclusion

    Code contracts are a more complex mechanism than they seem when we look at them at code level. Internally, more things are done than we know. I don't say that's wrong; it is just good to know how our code looks after compiling. Looking at this example, it is clear that we also need performance tests for contracted code, to see how heavy the impact on system performance is when running code that makes heavy use of code contracts.
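
    A rough timing harness for that last point, as a sketch (the Randomizer class is the one from the post, assumed available; contract-rewriter settings will change the numbers substantially):

        using System;
        using System.Diagnostics;

        class ContractBenchmark
        {
            static void Main()
            {
                var randomizer = new Randomizer(); // the post's class, assumed available
                const int iterations = 1000000;

                Stopwatch watch = Stopwatch.StartNew();
                for (int i = 0; i < iterations; i++)
                {
                    randomizer.GetRandomFromRangeContracted(1, 100);
                }
                watch.Stop();

                Console.WriteLine("{0} contracted calls: {1} ms",
                    iterations, watch.ElapsedMilliseconds);
            }
        }

    Comparing against an identical loop over a plain if/throw version of the method gives the contract overhead directly.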


  • RoboCopy Log File Analysis

    - by BobJim
    Is it possible to analyse the log text file output by RoboCopy and extract the lines which are flagged as "New Dir" and "Extra Dir"? I would like each extracted line to contain all the details the log records for that "New Dir" or "Extra Dir". The reason for completing this task is to understand how two folder structures have changed over time: one version has been kept internally at the parent company, and the second has been used by a consultancy. For your information, I am using Windows 7.
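
    A minimal C# sketch of the extraction (the log path is illustrative; in the default log format the classification appears as a tagged column on the line, so a plain substring match usually suffices):

        using System;
        using System.IO;
        using System.Linq;

        class RoboCopyLogFilter
        {
            static void Main()
            {
                // Keep the whole line so the size/path details survive.
                var hits = File.ReadLines(@"C:\logs\robocopy.log")
                               .Where(line => line.Contains("New Dir") ||
                                              line.Contains("Extra Dir"));

                foreach (string line in hits)
                {
                    Console.WriteLine(line);
                }
            }
        }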


  • Excel techniques for perfmon csv log file analysis

    - by Aszurom
    I have perfmon running against several servers, outputting to a .csv file data like CPU % time, memory bytes free, and hard disk I/O metrics like s/write and writes/s. The ones graphing the SQL servers are also collecting SQL stats; the web servers are collecting .NET-relevant stuff. I am aware of PAL, and actually used it as a template for what data to capture based on server type. I just don't think the output it generates is detailed or flexible enough, though it does do a pretty remarkable job of parsing logs and making graphs. I'm borderline incompetent with Excel, so I'm hoping to be directed to some knowledge of how to take a perfmon output .csv and mine it in Excel to produce some numbers that are meaningful to me as a sysadmin.

    I could of course just pick a range of data and assemble a graph out of it and look for spikes and trends, but I'm convinced there is some technique to this that makes it more manageable than staring at a monstrous spreadsheet of numbers and trying to make graphs of it. Plus, it's pretty time consuming and not something I can do as a "take a glance at the servers" sort of routine. I'm graphing CPU, disk use, network b/sec, etc. in Cacti as well, which is nice for seeing big trends. The problem is that those are 5-minute averages, so a server could have an intermittent problem that washes out in a 5-minute average. What do you do with perfmon data that I could learn from?


  • Linux installation analysis

    - by blunders
    "Ending company IT Admin relationship" has a good checklist for taking over an existing IT system, but I'm wondering as it relates to Linux: What is the most effective way to assess the scope of existing custom configurations, installs, scripts, etc done? Is there any software that will check if the kernel, system files, etc mirror the default files for the version installed? At this point I don't know what distro of Linux the server (though using Netcraft I do know the server appears to be Linux) -- so it's possible without knowing that information that this would be a hard question to answer.


  • MySql Data Loss - post mortem analysis - RackSpace Cloud Server

    - by marfarma
    After a recent 'emergency migration' of a Rackspace cloud server, the MySQL databases on our server snapshot image proved to be days older than the backup date, and yet files that were uploaded through the affected webapp had been written to the file system. Related metadata that was written to the database was lost, but the files themselves were backed up. Once I was able to manually access the MySQL data files before the MySQL server started (the server was configured to start MySQL on boot), I was able to see that the update times for ib_logfile1, ib_logfile0 and ibdata1 were days old. As with this poster, mysql data loss after server crash, it's as if some caching controller had told the OS / MySQL server that it had committed data that was still in cache, and it was lost instead of flushed. I can't quite wrap my head around how the uploaded files got written but the database data did not; I would have thought that any cache would have been flushed system-wide, rather than process by process. Any suggestions as to how this might have happened?


  • FAT filesystem analysis tool

    - by Andy
    I have a dump of a FAT file system. Is there a Windows tool I can use to analyse it, one that can:

    Provide basic information (sector size etc.)
    Validate the file system, with basic corruption checking
    Allow the files and directory structure to be viewed and possibly edited (i.e. by mounting it as a Windows partition)

    Thanks, Andy

