Search Results

Search found 912 results on 37 pages for 'massive'.


  • Creating meaningful routes in wizard style ASP.NET MVC form

    - by R0MANARMY
    I apologize in advance for a long question; I figured it's better to have a bit more information than not enough. I'm working on an application with a fairly complex form (~100 fields on it). In order to make the UI a little more presentable, the fields are organized into regions and split across multiple (~10) tabs (not unlike this, but each tab does a submit/redirect to the next tab). This large input form can also be in one of 3 views (read only, editable, print friendly). The form represents a large domain object (let's call it Foo). I have a controller for said domain object (FooController). It makes sense to me to have the controller be responsible for all the CRUD related operations. Here are the problems I'm having trouble figuring out. Goals: I'd like to keep to conventions so that Foo/Create creates a new record, Foo/Delete deletes a record, Foo/Edit/{foo_id} takes you to the first tab of the form, ...etc. I'd like to be able to not repeat the data access code, such that I can have Foo/Edit/{foo_id}/tab1, Foo/View/{foo_id}/tab1, Foo/Print/{foo_id}/tab1, ...etc. use the same data access code to get the data and just specify which view to use to render it. My current implementation has a massive FooController with Create, Delete, Tab1, Tab2, etc. actions. Tab actions are split out into separate files for organization (using partial classes, which may or may not be abuse of partial classes). The problem I'm running into is how to organize my controller(s) and routes to make that happen. I have the default route {controller}/{action}/{id}, which handles goal 1 properly but doesn't quite play nice with goal 2. I tried to address goal 2 by defining extra routes like so: routes.MapRoute( "FooEdit", "Foo/Edit/{id}/{action}", new { controller = "Foo", action = "Tab1", mode = "Edit", id = (string)null } ); routes.MapRoute( "FooView", "Foo/View/{id}/{action}", new { controller = "Foo", action = "Tab1", mode = "View", id = (string)null } ); routes.MapRoute( "FooPrint", "Foo/Print/{id}/{action}", new { controller = "Foo", action = "Tab1", mode = "Print", id = (string)null } ); However, defining these extra routes causes Url.Action to generate routes like Foo/Edit/Create instead of Foo/Create. That leads me to believe I designed something very, very wrong, but this is my first attempt at an ASP.NET MVC project and I don't know any better. Any advice with this particular situation would be awesome, but feedback on design in similar projects is welcome.
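    One possible direction (a hedged sketch, not a confirmed fix): adding route constraints so the mode-specific routes only match the TabN actions, and registering them before the default route, is a common way to stop Url.Action from resolving Foo/Create as Foo/Edit/Create. The route names and the "Tab\d+" pattern below are illustrative assumptions.

```csharp
// Global.asax.cs -- sketch only; route names and the Tab pattern are assumptions.
public static void RegisterRoutes(RouteCollection routes)
{
    routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

    // Specific routes first, constrained so only TabN actions can match them.
    // Url.Action("Create", "Foo") then falls through to the default route.
    routes.MapRoute(
        "FooEdit", "Foo/Edit/{id}/{action}",
        new { controller = "Foo", action = "Tab1", mode = "Edit", id = (string)null },
        new { action = @"Tab\d+" });

    routes.MapRoute(
        "FooView", "Foo/View/{id}/{action}",
        new { controller = "Foo", action = "Tab1", mode = "View", id = (string)null },
        new { action = @"Tab\d+" });

    routes.MapRoute(
        "FooPrint", "Foo/Print/{id}/{action}",
        new { controller = "Foo", action = "Tab1", mode = "Print", id = (string)null },
        new { action = @"Tab\d+" });

    // Default route last, so Foo/Create, Foo/Delete and Foo/Edit/{id} still work.
    routes.MapRoute(
        "Default", "{controller}/{action}/{id}",
        new { controller = "Home", action = "Index", id = "" });
}
```

    With this ordering, URL generation for Create/Delete never considers the constrained routes, while links to tab actions can still pick the Edit/View/Print variants.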

    Read the article

  • Override variables while testing a standalone Perl script

    - by BrianH
    There is a Perl script in our environment that I now need to maintain. It is full of bad practices, including using (and re-using) global variables throughout the script. Before I start making changes to the script, I was going to try to write some test scripts so I can have a good regression base. To do this, I was going to use a method described on this page. I was starting by writing tests for a single subroutine. I put this line somewhat near the top of the script I am testing: return 1 if ( caller() ); That way, in my test script, I can require 'script_to_test.pl'; and it won't execute the whole script. The first subroutine I was going to test makes a lot of use of global variables that are set throughout the script. My thought was to try to override these variables in my test script, something like this: require_ok('script_to_test.pl'); $var_from_other_script = 'Override Value'; ok( sub_from_other_script() ); Unfortunately (for me), the script I am testing has a massive "my" block at the top, where it declares all variables used in the script. This prevents my test script from seeing/changing the variables in the script I'm running tests against. I've played with Exporter, Test::Mock..., and some other modules, but it looks like if I want to be able to change any variables I am going to have to modify the other script in some fashion. My goal is to not change the other script, but to get some good tests running so when I do start changing the other script, I can make sure I didn't break anything. The script is about 10,000 lines (3,000 of them in the main block), so I'm afraid that if I start changing things, I will affect other parts of the code, so having a good test suite would help. Is this possible? Can a calling script modify variables in another script declared with "my"? And please don't jump in with answers like, "Just re-write the script from scratch", etc. That may be the best solution, but it doesn't answer my question, and we don't have the time/resources for a re-write.
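    For what it's worth, a lexical declared with "my" genuinely cannot be reached from a calling script, so a fully non-invasive override is not possible without tricks like PadWalker. If a one-line concession is acceptable, promoting the globals the tests depend on from "my" to "our" turns them into package variables that a test can set. A rough sketch, with the script name, variable, and sub names taken from the question and the main package assumed:

```perl
#!/usr/bin/perl
# test_sub.t -- sketch; assumes script_to_test.pl declares the global with "our"
# (instead of "my") and defines its subs in package main.
use strict;
use warnings;
use Test::More tests => 2;

require_ok('script_to_test.pl');

# Package variables can be set from outside the script; "my" lexicals cannot.
$main::var_from_other_script = 'Override Value';

ok( main::sub_from_other_script(), 'sub runs with overridden global' );
```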

    Read the article

  • php adding images to another image, exact positioning

    - by user271619
    I have a cool snippet of code that works well, except for one thing. The code will take an icon I want to add to an existing picture. I can position it where I want, too! Which is exactly what I need to do. However, I'm stuck on one thing concerning the placement. The code's "starting position" (on the main image: navIcons.png) is from the bottom right. I have 2 variables: $move_left = 10; and $move_up = 8;. So that means I can position the icon.png 10px left, and 8px up, from the bottom right corner. I really really want to start the positioning from the top left of the image, so I'm really moving the icon 10px right & 8px down, from the top left position of the main image. Can someone look at my code and see if I'm just missing something that inverts that starting position? function attachIcon($imgname) { $mark = imagecreatefrompng($imgname); imagesavealpha($mark, true); list($icon_width, $icon_height) = getimagesize($imgname); $img = imagecreatefrompng('images/sprites/navIcons.png'); imagesavealpha($img, true); $move_left = 10; $move_up = 9; list($mainpic_width, $mainpic_height) = getimagesize('images/sprites/navIcons.png'); imagecopy($img, $mark, $mainpic_width-$icon_width-$move_left, $mainpic_height-$icon_height-$move_up, 0, 0, $icon_width, $icon_height); imagepng($img); // display the image + positioned icon in the browser //imagepng($img,'newnavIcon.png'); // rewrite the image with icon attached. } header('Content-Type: image/png'); attachIcon('icon.png'); ?> For those who are wondering why I'd even bother doing this: in a nutshell, I like to add 16x16 icons to 1 single image, while using css to display that individual icon. This does involve me downloading the image (sprite), opening Photoshop, adding the new icon (positioning it), and reuploading it to the server. Not a massive ordeal, but just having fun with php.
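    One possible tweak (a sketch, not tested against the original sprite): imagecopy()'s third and fourth arguments are the destination x/y measured from the top-left of the destination image, so positioning from the top-left just means passing the offsets directly instead of subtracting them from the sprite's width and height. Variable names below mirror the question.

```php
<?php
// Sketch: place the icon $move_right px from the left edge and $move_down px
// from the top edge of the sprite, instead of measuring from the bottom-right.
function attachIconTopLeft($imgname)
{
    $mark = imagecreatefrompng($imgname);
    imagesavealpha($mark, true);
    list($icon_width, $icon_height) = getimagesize($imgname);

    $img = imagecreatefrompng('images/sprites/navIcons.png');
    imagesavealpha($img, true);

    $move_right = 10;  // distance from the sprite's left edge
    $move_down  = 8;   // distance from the sprite's top edge

    // dst_x / dst_y are already relative to the top-left, so no subtraction needed.
    imagecopy($img, $mark, $move_right, $move_down, 0, 0, $icon_width, $icon_height);

    imagepng($img); // stream the combined image to the browser
}

header('Content-Type: image/png');
attachIconTopLeft('icon.png');
```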

    Read the article

  • zend_navigation and modules

    - by Grant Collins
    Hi, I am developing an application at the moment with Zend and I have separated the app into modules. The default module is the main site, which users who are not logged in can access and have free rein to look around. When you log in, depending on the user type you either go to module A or module B, which is controlled by simple ACLs. If you have access to Module A you cannot access Module B and vice versa. Both user types can see the default module. Now I want to use Zend_Navigation to manage the entire application's navigation in all modules. I am not sure how to go about this, as all the examples that I have seen work within a single module or a very simple application. I've tried to have my navigation.xml file look like this: <configdata> <navigation> <label>Home</label> <controller>index</controller> <action>index</action> <module>default</module> <pages> <tour> <label>tour</label> <controller>tour</controller> <action>index</action> <module>default</module> </tour> <blog> <label>blog</label> <url>http://blog.mysite.com</url> </blog> <support> <label>Support</label> <controller>support</controller> <action>index</action> <module>default</module> </support> </pages> </navigation> </configdata> This is fine for the default module, but how would I go about adding the other modules to this navigation file? Each module has its own home page, and other pages, etc. Would I be better off adding a unique navigation.xml file for each module that is loaded in the preDispatch plugin that I have written to handle my ACLs? Or keep them in one massive navigation file? Any tips would be fantastic. Thanks, Grant

    Read the article

  • .NET JIT Code Cache leaking?

    - by pitchfork
    We have a server component written in .Net 3.5. It runs as a service on Windows Server 2008 Standard Edition. It works great, but after some time (days) we notice massive slowdowns and an increased working set. We suspected some kind of memory leak and used WinDBG/SOS to analyze dumps of the process. Unfortunately the GC Heap doesn’t show any leak, but we noticed that the JIT code heap has grown from 8MB after the start to more than 1GB after a few days. We don’t use any dynamic code generation techniques of our own. We use Linq2SQL, which is known for dynamic code generation, but we don’t know if it can cause such a problem. The main question is whether there is any technique to analyze the dump and check where all these Host Code Heap blocks that are shown in the WinDBG dumps come from? [Update] In the meantime we did some more analysis and had Linq2SQL as a probable suspect, especially since we do not use precompiled queries. The following example program creates exactly the same behaviour where more and more Host Code Heap blocks are created over time. using System; using System.Linq; using System.Threading; namespace LinqStressTest { class Program { static void Main(string[] args) { for (int i = 0; i < 100; ++i) ThreadPool.QueueUserWorkItem(Worker); while(runs < 1000000) { Thread.Sleep(5000); } } static void Worker(object state) { for (int i = 0; i < 50; ++i) { using (var ctx = new DataClasses1DataContext()) { long id = rnd.Next(); var x = ctx.AccountNucleusInfos.Where(an => an.Account.SimPlayers.First().Id == id).SingleOrDefault(); } } var localruns = Interlocked.Add(ref runs, 1); System.Console.WriteLine("Action: " + localruns); ThreadPool.QueueUserWorkItem(Worker); } static Random rnd = new Random(); static long runs = 0; } } When we replace the Linq query with a precompiled one, the problem seems to disappear.
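    Since the precompiled query made the symptom disappear, a compiled-query sketch may be worth spelling out. This reuses the DataClasses1DataContext and AccountNucleusInfos names from the test program above; CompiledQuery.Compile builds the delegate once, so LINQ to SQL does not re-translate (and re-JIT) the query on every call.

```csharp
using System;
using System.Data.Linq;
using System.Linq;

static class AccountQueries
{
    // Compiled once and cached in a static field, so repeated calls reuse the
    // same translated query instead of generating new dynamic methods each time.
    private static readonly Func<DataClasses1DataContext, long, AccountNucleusInfo> ByPlayerId =
        CompiledQuery.Compile((DataClasses1DataContext ctx, long id) =>
            ctx.AccountNucleusInfos
               .Where(an => an.Account.SimPlayers.First().Id == id)
               .SingleOrDefault());

    public static AccountNucleusInfo FindByPlayerId(DataClasses1DataContext ctx, long id)
    {
        return ByPlayerId(ctx, id);
    }
}
```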

    Read the article

  • Visual Studio soft-crashing when encountering XAML Errors in initialize.

    - by Aren
    I've been having some serious issues with Visual Studio 2010 as of late. It's been crashing in a peculiar way when I encounter certain types of XAML errors during the InitializeComponent() of a control/window. The program breaks and Visual Studio gears up like it's catching an exception (because it is) and then stops midway, displaying a broken highlight in my XAML file with no details as to what is wrong. Example: There are no pop-outs or details anywhere about what is wrong, only a callstack that points to my InitializeComponent() call. Now normally I'd just do some trial and error to fix this problem, and find out where I messed up, but the real problem isn't my code. Visual Studio is rendered completely useless at this point. It reports my application as still in "Running" mode. The Stop/Break/Restart buttons on the toolbar or in the menus don't do anything (but grey out). Closing the application does not stop this behaviour; closing Visual Studio gets it stuck in a massive loop where it yells at me complaining that every open file is not in the debug project, then repeats this process when I have exhausted every open file. I have to force-close devenv.exe, and after this happening 3-4 times in a row it's a lot of wasted time (as my projects are usually pretty big and Visual Studio can be quite slow at loading). To the point: Has anyone else experienced this? How can I stop Visual Studio from locking up? Can I at LEAST get information out of this beast another way so I can fix my XAML error sooner rather than after 3-4 trial-and-error compiles yielding the same crash? Any & all help would be appreciated. Visual Studio 2010 version: 10.0.30319.1RTM Edit & Update FWIW, mostly the errors that cause this are XamlParseExceptions (I figured this out after I found what was wrong with my XAML). I think I need to be clearer though: I'm not looking for the solution to my code problem, as these are usually typos / small things; I'm looking for a solution to VStudio getting all buggered up as a result. The particular error in the above image that 100% for sure caused this was a XamlParseException caused by forgetting a Value attribute on a data trigger. I've fixed that part, but it still doesn't tell me why Visual Studio becomes a lump of neutered program when a perfectly normal exception is thrown in the parsing of the XAML. Code that will cause this issue (at least for me) This is the base template WPF Application, with the following Window.xaml code. The problem is a missing Value="True" on the <DataTrigger ...> in the template. It generates a XamlParseException and Visual Studio crashes as described above when debugging it. Final Notes The following solutions did not help me: Restarting Visual Studio Rebooting Reinstalling Visual Studio
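    For illustration only (the question's actual Window.xaml was not reproduced above): the kind of markup described, a DataTrigger missing its Value attribute, looks roughly like the fragment below, and supplying the Value is what clears the XamlParseException. The binding path is made up for the example.

```xml
<!-- Broken (hypothetical): a DataTrigger with no Value attribute throws a
     XamlParseException inside InitializeComponent() at runtime. -->
<DataTrigger Binding="{Binding IsSelected}">
    <Setter Property="Background" Value="LightBlue" />
</DataTrigger>

<!-- Fixed: the trigger declares the value it compares against. -->
<DataTrigger Binding="{Binding IsSelected}" Value="True">
    <Setter Property="Background" Value="LightBlue" />
</DataTrigger>
```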

    Read the article

  • Do Websites need Local Databases Anymore?

    - by viatropos
    If there's a better place to ask this, please let me know. Every time I build a new website/blog/shopping-cart/etc., I keep trying to do the following: Extract out common functionality into reusable code (Rubygems and jQuery plugins mostly) If possible, convert that gem into a small service so I never have to deal with a database for the objects involved (by service, I mean something lean and mean, usually built with the Sinatra Web Framework with a few core models). My assumption is, if I can remove dependencies on local databases, that will make it easier and more scalable in the long run (scalable in terms of reusability and manageability, not necessarily database/performance). I'm not sure if that's a good or bad assumption yet. What do you think? I've made this assumption because of the following reason: Most serious database/model functionality has been built on the internet somewhere. Just to name a few: Social Network API: Facebook Messaging API: Twitter Mailing API: Google Event API: Eventbrite Shopping API: Shopify Comment API: Disqus Form API: Wufoo Image API: Picasa Video API: Youtube ... Each of those things are fairly complicated to build from scratch and to make as optimized, simple, and easy to use as those companies have made them. So if I build an app that shows pictures (picasa) on an Event page (eventbrite), and you can see who joined the event (facebook events), and send them emails (google apps api), and have them fill out monthly surveys (wufoo), and watch a video when they're done (youtube), all integrated into a custom, easy to use website, and I can do that without ever creating a local database, is that a good thing? I ask because there's two things missing from the puzzle that keep forcing me to create that local database: Post API RESTful/Pretty Url API While there's plenty of Blogging systems and APIs for them, there is no one place where you can just write content and have it part of some massive thing. For every app, I have to use code for creating pretty/restful urls, and that saves posts. But it seems like that should be a service! Question is, is that what the website is? ...That place to integrate the worlds services for my specific cause... and, sigh, to store posts that only my site has access to. Will everyone always need "their own blog"? Why not just have a profile and write lots of content on an established platform like StackOverflow or Facebook? ... That way I can write apps entirely without a database and know that I'm doing it right. Note: Of course at some point you'd need a database, if you were doing something unique or new. But for the case where you're just rewiring information or creating things like videos, events, and products, is it really necessary anymore??

    Read the article

  • Would a Centralized Blogging Service Work?

    - by viatropos
    If there's a better place to ask this, please let me know. Every time I build a new website/blog/shopping-cart/etc., I keep trying to do the following: Extract out common functionality into reusable code (Rubygems and jQuery plugins mostly) If possible, convert that gem into a small service so I never have to deal with a database for the objects involved (by service, I mean something lean and mean, usually built with the Sinatra Web Framework with a few core models. My assumption is, if I can remove dependencies on local databases, that will make it easier and more scalable in the long run (scalable in terms of reusability and manageability, not necessarily database/performance). I'm not sure if that's a good or bad assumption yet. What do you think? I've made this assumption because of the following reason: Most serious database/model functionality has been built on the internet somewhere. Just to name a few: Social Network API: Facebook Messaging API: Twitter Mailing API: Google Event API: Eventbrite Shopping API: Shopify Comment API: Disqus Form API: Wufoo Image API: Picasa Video API: Youtube ... Each of those things are fairly complicated to build from scratch and to make as optimized, simple, and easy to use as those companies have. So if I build an app that shows pictures (picasa) on an Event page (eventbrite), and you can see who joined the event (facebook events), and send them emails (google apps api), and have them fill out monthly surveys (wufoo), and watch a video when they're done (youtube), all integrated into a custom, easy to use website, and I can do that without ever creating a local database, is that a good thing? I ask because there's two things missing from the puzzle that keep forcing me to create that local database: Post API RESTful/Pretty Url API While there's plenty of Blogging systems and APIs for them, there is no one place where you can just write content and have it part of some massive thing. For every app, I have to use code for creating pretty/restful urls, and that saves posts. But it seems like that should be a service! Question is, is that the main point of a website? Will everyone always need "their own blog"? Why not just have a profile and write lots of content on an established platform like StackOverflow or Facebook?

    Read the article

  • WPF performance for large number of elements on the screen

    - by Mark
    I'm currently trying to create a Scene in WPF where I have around 250 controls on my screen and the user can Pan and Zoom in and out of these controls using the mouse. I have run the WPF Performance Suite tools on the application, and when there are a large number of these controls on the screen (i.e. when the user has zoomed right out) the FPS drops down to around 15, which is not very good. Here is the basic outline of the XAML: <Window> <Window.Resources> <ControlTemplate x:Key="LandTemplate" TargetType="{x:Type local:LandControl}"> <Canvas> <Path Fill="White" Stretch="Fill" Stroke="Black" StrokeThickness="1" Width="55.5" Height="74.687" Data="M0.5,0.5 L55,0.5 L55,74.187 L0.5,74.187 z"/> <Canvas x:Name="DetailLevelCanvas" Width="24.5" Height="21" Canvas.Left="15.306" Canvas.Top="23.972"> <TextBlock Width="21" Height="14" Text="712" TextWrapping="Wrap" Foreground="Black"/> <TextBlock Width="17.5" Height="7" Canvas.Left="7" Canvas.Top="14" Text="614m2" TextWrapping="Wrap" FontSize="5.333" Foreground="Black"/> </Canvas> </Canvas> </ControlTemplate> </Window.Resources> ... <local:LandControl Width="55.5" Height="74.552" Canvas.Top="xxx" Template="{StaticResource LandTemplate}" RenderTransformOrigin="0.5,0.5" Canvas.Left="xxx"> <local:LandControl Width="55.5" Height="74.552" Canvas.Top="xxx" Template="{StaticResource LandTemplate}" RenderTransformOrigin="0.5,0.5" Canvas.Left="xxx"> <local:LandControl Width="55.5" Height="74.552" Canvas.Top="xxx" Template="{StaticResource LandTemplate}" RenderTransformOrigin="0.5,0.5" Canvas.Left="xxx"> <local:LandControl Width="55.5" Height="74.552" Canvas.Top="xxx" Template="{StaticResource LandTemplate}" RenderTransformOrigin="0.5,0.5" Canvas.Left="xxx"> ... and so on... </Window> I've tried to minimise the details in the control template and I even did a massive find and replace of the controls to just put their raw elements inline instead of using a template, but with no noticeable performance improvements. I have seen other SO questions about this and people say to do custom drawing, but I don't really see how that makes sense when you have to zoom and pan like I do. If anyone can help out here, that would be great! Mark
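    A commonly suggested mitigation, offered as a sketch rather than a verified fix for this exact scene: cache each control's rendering to a bitmap so that panning and zooming re-compose cached textures instead of re-rendering the Path geometry every frame. BitmapCache requires WPF 4 / .NET 4; the Canvas coordinates below are placeholders.

```xml
<!-- Sketch: each parcel renders once to a GPU texture; pan/zoom then reuses
     the cached bitmap instead of re-tessellating the vector content. -->
<local:LandControl Width="55.5" Height="74.552"
                   Canvas.Left="120" Canvas.Top="80"
                   Template="{StaticResource LandTemplate}">
    <local:LandControl.CacheMode>
        <BitmapCache RenderAtScale="1" SnapsToDevicePixels="False" />
    </local:LandControl.CacheMode>
</local:LandControl>
```

    If the text becomes blurry when zoomed in, raising RenderAtScale (or invalidating the cache at high zoom levels) is the usual trade-off; swapping to a simpler template when zoomed far out is another option that keeps the element count but cuts per-element cost.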

    Read the article

  • Python. How to iterate through a list of lists looking for a partial match

    - by Becca Millard
    I'm completely stuck on this, without even an idea about how to wrap my head around the logic of this. In the first half of the code, I have successfully generated a list of (thousands of) lists of players' names and efficiency scores: eg name_order_list = [["Bob", "Farley", 12.345], ["Jack", "Donalds", 14.567], ["Jack", "Donalds", 13.421], ["Jack", "Donalds", 15.232],["Mike", "Patricks", 10.543]] What I'm trying to do is come up with a way to make a list of lists of the average efficiency of each player. So in that example, Jack Donalds appears multiple times, so I'd want to recognize his name somehow and average out the efficiency scores. Then sort that new list by efficiency, rather than name. So then the outcome would be like: average_eff_list = [[12.345, "Bob", "Farley"], [14.407, "Jack", "Donalds"], [10.543, "Mike", "Patricks"]] Here's what I tried (it's kind of a mess, but should be readable): total_list = [] odd_lines = [name_order_list[i] for i in range(len(name_order_list)) if i % 2 == 0] even_lines = [name_order_list[i] for i in range(len(name_order_list)) if i % 2 == 1] i = 0 j = i-1 while i <= 10650: iteration = 2 total_eff = 0 while odd_lines[i][0:2] == even_lines[i][0:2]: if odd_lines[i][0:2] == even_lines[j][0:2]: if odd_lines[j][0:2] != even_lines[j][0:2]: total_eff = even_lines[j][2]/(iteration-1) iteration -= 1 #account for the single (rather than dual) additional entry else: total_eff = total_eff if iteration == 2: total_eff = (odd_lines[i][2] + even_lines[i][2]) / iteration else: total_eff = ((total_eff * (iteration - 2)) + (odd_lines[i][2] + even_lines[i][2])) / iteration iteration += 2 i += 1 j += 1 if i > 10650: break else: if odd_lines[i][0:2] == even_lines[j][0:2]: if odd_lines[j][0:2] != even_lines[j][0:2]: total_eff = (odd_lines[i][2] + even_lines[j][2]) / iteration else: total_eff = ((total_eff * (iteration -2)) + odd_lines[i][2]) / (iteration - 1) if total_eff == 0: #there's no match at all total_odd = [odd_lines[i][2], odd_lines[i][0], odd_lines[i][1]] total_list.append(total_odd) if even_lines[i][0:2] != odd_lines[i+1][0:2]: total_even = [even_lines[i][2], even_lines[i][0], even_lines[i][1]] else: total = [total_eff, odd_lines[i][0], odd_lines[i][1]] total_list.append(total) i += 1 if i > 10650: break else: print(total_list) Now, this runs well enough (doesn't get stuck or print someone's name multiple times) but the efficiency values are off by a large amount, so I know that scores are getting missed somewhere. This is a problem with my logic, I think, so any help would be greatly appreciated. As would any advice about how to loop through that massive list in a smarter way, since I'm sure there is one... EDIT: for this exercise, I need to keep it all in a list format. I can make new lists, but not using dictionaries, classes, etc.
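    A sketch of a list-only approach (no dictionaries or classes, per the edit): sort the list by name so duplicates sit next to each other, then walk it once, accumulating a running total and count per player. itertools.groupby would also work, but the explicit loop keeps everything as plain lists.

```python
def average_efficiency(name_order_list):
    """Collapse [[first, last, eff], ...] into [[avg_eff, first, last], ...]."""
    # Sort by name so all of a player's rows are adjacent.
    rows = sorted(name_order_list, key=lambda row: (row[0], row[1]))

    averaged = []
    i = 0
    while i < len(rows):
        first, last = rows[i][0], rows[i][1]
        total, count = 0.0, 0
        # Consume every consecutive row belonging to the same player.
        while i < len(rows) and rows[i][0] == first and rows[i][1] == last:
            total += rows[i][2]
            count += 1
            i += 1
        averaged.append([total / count, first, last])

    # Sort the result by average efficiency instead of by name.
    averaged.sort(key=lambda row: row[0])
    return averaged

name_order_list = [["Bob", "Farley", 12.345], ["Jack", "Donalds", 14.567],
                   ["Jack", "Donalds", 13.421], ["Jack", "Donalds", 15.232],
                   ["Mike", "Patricks", 10.543]]
print(average_efficiency(name_order_list))
# Jack Donalds averages (14.567 + 13.421 + 15.232) / 3 = 14.406..., matching
# the 14.407 in the example output above.
```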

    Read the article

  • Importing a large delimited file to a MySQL table

    - by Tom
    I have this large (and oddly formatted) txt file from the USDA's website. It is the NUT_DATA.txt file. But the problem is that it is almost 27MB! I was successful in importing a few other smaller files, but my method was using file_get_contents, so it makes sense that an error would be thrown if I try to snag 27+ MB of RAM. So how can I import this massive file to my MySQL DB without running into a timeout and RAM issue? I've tried just getting one line at a time from the file, but this ran into a timeout issue. Using PHP 5.2.0. Here is the old script (the fields in the DB are just numbers because I could not figure out what number represented what nutrient; I found this data very poorly documented. Sorry about the ugliness of the code): <? $file = "NUT_DATA.txt"; $data = split("\n", file_get_contents($file)); // split each line $link = mysql_connect("localhost", "username", "password"); mysql_select_db("database", $link); for($i = 0, $e = sizeof($data); $i < $e; $i++) { $sql = "INSERT INTO `USDA` (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17) VALUES("; $row = split("\^", trim($data[$i])); // split each line by caret for ($j = 0, $k = sizeof($row); $j < $k; $j++) { $val = trim($row[$j], '~'); $val = (empty($val)) ? 0 : $val; $sql .= ((empty($val)) ? 0 : $val) . ','; // this gets rid of those tildes and replaces empty strings with 0s } $sql = rtrim($sql, ',') . ");"; mysql_query($sql) or die(mysql_error()); // query the db } echo "Finished inserting data into database.\n"; mysql_close($link); ?>
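    A rough sketch of a lower-memory variant (assuming the same USDA table layout as the script above): read the file one line at a time with fgets() so the whole 27MB never sits in RAM, and lift the time limit for the long run. LOAD DATA INFILE or batched multi-row INSERTs would be faster still, but this stays closest to the original code.

```php
<?php
// Sketch: stream NUT_DATA.txt line by line instead of file_get_contents().
set_time_limit(0); // this import can legitimately take a while

$link = mysql_connect("localhost", "username", "password");
mysql_select_db("database", $link);

$handle = fopen("NUT_DATA.txt", "r");
if ($handle === false) {
    die("Could not open NUT_DATA.txt");
}

while (($line = fgets($handle)) !== false) {
    $row = explode("^", trim($line));      // fields are caret-delimited
    $values = array();
    foreach ($row as $field) {
        $field = trim($field, '~');        // strip the tilde quoting
        $values[] = ($field === '') ? 0 : mysql_real_escape_string($field, $link);
    }
    // Assumes one value per column of `USDA`, as in the original script.
    $sql = "INSERT INTO `USDA` VALUES('" . implode("','", $values) . "')";
    mysql_query($sql, $link) or die(mysql_error($link));
}

fclose($handle);
mysql_close($link);
echo "Finished inserting data into database.\n";
```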

    Read the article

  • The best way to separate admin functionality from a public site?

    - by AndrewO
    I'm working on a site that's grown both in terms of user-base and functionality to the point where it's becoming evident that some of the admin tasks should be separate from the public website. I was wondering what the best way to do this would be. For example, the site has a large social component to it, and a public sales interface. But at the same time, there are back office tasks, bulk upload processing, dashboards (with long running queries), and customer relations tools in the admin section that I would like not to be affected by spikes in public traffic (or affect the public-facing response time). The site is running on a fairly standard Rails/MySQL/Linux stack, but I think this is more of an architecture problem than an implementation one: mainly, how does one keep the data and business logic in sync between these different applications? Some strategies that I'm evaluating: 1) Create a slave database of the public facing database on another machine. Extract out all of the model and library code so that it can be shared between the applications. Create new controllers and views for the admin interfaces. I have limited experience with replication and am not even sure that it's supposed to be used this way (most of the time I've seen it, it's been for scaling out the read capabilities of the same application, rather than having multiple different ones). I'm also worried about the potential for latency issues if the slave is not on the same network. 2) Create new, more task/department-specific applications and use a message oriented middleware to integrate them. I read Enterprise Integration Patterns a while back and they seemed to advocate this for distributed systems. (Alternatively, in some cases the basic Rails-style RESTful API functionality might suffice.) But, I have nightmares about data synchronization issues and the massive re-architecting that this would entail. 3) Some mixture of the two. For example, the only public information necessary for some of the back office tasks is a read-only completion time or status. Would it make sense to have that on a completely separate system and send the data to the public site? Meanwhile, the user/group admin functionality would be run on a separate system sharing the database? The downside is, this seems to keep many of the concerns I have with the first two, especially the re-architecting. I'm sure the answers are going to be highly dependent on a site's specific needs, but I'd love to hear success (or failure) stories.

    Read the article

  • Speeding up templates in GAE-Py by aggregating RPC calls

    - by Sudhir Jonathan
    Here's my problem: class City(Model): name = StringProperty() class Author(Model): name = StringProperty() city = ReferenceProperty(City) class Post(Model): author = ReferenceProperty(Author) content = StringProperty() The code isn't important... it's this Django template: {% for post in posts %} <div>{{post.content}}</div> <div>by {{post.author.name}} from {{post.author.city.name}}</div> {% endfor %} Now let's say I get the first 100 posts using Post.all().fetch(limit=100), and pass this list to the template - what happens? It makes 200 more datastore gets - 100 to get each author, 100 to get each author's city. This is perfectly understandable, actually, since the post only has a reference to the author, and the author only has a reference to the city. The __get__ accessor on the post.author and author.city objects transparently do a get and pull the data back (See this question). Some ways around this are: Use Post.author.get_value_for_datastore(post) to collect the author keys (see the link above), and then do a batch get to get them all - the trouble here is that we need to re-construct a template data object... something which needs extra code and maintenance for each model and handler. Write an accessor, say cached_author, that checks memcache for the author first and returns that - the problem here is that post.cached_author is going to be called 100 times, which could probably mean 100 memcache calls. Hold a static key to object map (and refresh it maybe once in five minutes) if the data doesn't have to be very up to date. The cached_author accessor can then just refer to this map. All these ideas need extra code and maintenance, and they're not very transparent. What if we could do @prefetch def render_template(path, data) template.render(path, data) Turns out we can... hooks and Guido's instrumentation module both prove it. If the @prefetch method wraps a template render by capturing which keys are requested we can (at least to one level of depth) capture which keys are being requested, return mock objects, and do a batch get on them. This could be repeated for all depth levels, till no new keys are being requested. The final render could intercept the gets and return the objects from a map. This would change a total of 200 gets into 3, transparently and without any extra code. Not to mention greatly cut down the need for memcache and help in situations where memcache can't be used. Trouble is, I don't know how to do it (yet). Before I start trying, has anyone else done this? Or does anyone want to help? Or do you see a massive flaw in the plan?
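    A sketch of the batch-get idea from the first option, using the google.appengine.ext.db API the models imply; prefetch_refprops is an assumed helper name, but get_value_for_datastore and db.get are the real calls, and this stops short of the fully transparent @prefetch decorator described above.

```python
from google.appengine.ext import db

def prefetch_refprops(entities, *props):
    """Resolve ReferenceProperties for a batch of entities with one db.get()."""
    fields = [(entity, prop) for entity in entities for prop in props]
    # Pull the raw keys off the entities without triggering per-entity gets.
    ref_keys = [prop.get_value_for_datastore(entity) for entity, prop in fields]
    unique_keys = list(set(k for k in ref_keys if k is not None))
    ref_entities = dict((e.key(), e) for e in db.get(unique_keys) if e is not None)
    # Patch the resolved objects back onto the source entities.
    for (entity, prop), key in zip(fields, ref_keys):
        if key is not None and key in ref_entities:
            prop.__set__(entity, ref_entities[key])
    return entities

posts = Post.all().fetch(limit=100)
prefetch_refprops(posts, Post.author)            # one batch get for all authors
authors = [p.author for p in posts]
prefetch_refprops(authors, Author.city)          # one batch get for all cities
# The template can now read post.author.name and post.author.city.name
# without issuing 200 individual datastore gets.
```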

    Read the article

  • git filter-branch chmod

    - by Evan Purkhiser
    I accidentally had my umask set incorrectly for the past few months and somehow didn't notice. One of my git repositories has many files marked as executable that should be just 644. This repo has one main master branch, and about 4 private feature branches (that I keep rebased on top of the master). I've corrected the files in my master branch by running find -type f -exec chmod 644 {} \; and committing the changes. I then rebased my feature branches onto master. The problem is there are newly created files in the feature branches that exist only in those branches, so they weren't corrected by my massive chmod commit. I didn't want to create a new commit for each feature branch that does the same thing as the commit I made on master. So I decided it would be best to go back through each commit where a file was made and set the permissions. This is what I tried: git filter-branch -f --tree-filter 'chmod 644 `git show --diff-filter=ACR --pretty="format:" --name-only $GIT_COMMIT`; git add .' master.. It looked like this worked, but upon further inspection I noticed that every commit after a commit containing a new file with the proper permissions of 644 would actually revert the change with something like: diff --git a b old mode 100644 new mode 100755 I can't for the life of me figure out why this is happening. I think I must be misunderstanding how git filter-branch works. My Solution I've managed to fix my problem using this command: git filter-branch -f --tree-filter 'FILES="$FILES "`git show --diff-filter=ACMR --pretty="format:" --name-only $GIT_COMMIT`; chmod 644 $FILES; true' development.. I keep adding onto the FILES variable to ensure that in each commit any file created at some point has the proper mode. However, I'm still not sure I really understand why git tracks the file mode for each commit. I had thought that since I had fixed the mode of the file when it was first created, it would stay that mode unless one of my other commits explicitly changed it to something else. That did not appear to be the case. The reason I thought that this would work is from my understanding of rebase. If I go back to HEAD~5 and change a line of code, that change is propagated through; it doesn't just get changed back in HEAD~4.

    Read the article

  • Modular GWT design concerns

    - by GlGuru
    Hi, I have a couple of questions regarding a modular GWT based application framework. I have some ideas about them, but being new to the field of web development I feel they are far from ideal. I'd appreciate a few comments and suggestions in this regard. Here are my questions: I am developing a framework which will allow third parties to embed GWT applications into our website and do some communication with them using simple iFrame postMessage. All these third party modules are going to use our SDK, which is also GWT based. The problem is that even though all the modules will be using the same codebase, there is going to be a massive overhead in the amount of duplicate Javascript code (i.e. our common SDK code base, which is quite large) being downloaded on the client's machine. This is highly redundant and problematic, not only due to the sheer size of duplicate code but also due to the fact that subsequent updates of the SDK would require the modules to be recompiled, which is going to create a DLL hell kind of scenario in the long run. What is the best way of doing this kind of thing? Is there a way where I can have some static GWT code (i.e. the SDK) that the dynamic GWT module refers to (even if it lies on a different domain) and have it all work happily? The other part of the problem lies in providing a robust scripting front end to the SDK. At first it appears to be trivial, since Javascript itself is a scripting language. However, I do not know how to load and call a piece of pure Javascript code at runtime. I am willing to put restrictions on the target Javascript (i.e. having a function main and a unique namespace or something). Furthermore, the Javascript will come as a string from a database and not as a full URL. If it's doable in Javascript, how does one get this right in GWT, i.e. forcing the compiler to emit a certain function in the generated Javascript? This, I believe, can be made less of a problem by having a stub Javascript with all the right requirements which just loads up a GWT generated Javascript. Is any of this possible at all? I generally hate to be this verbose, but I hope to find a quick solution to the problem as it's holding up my progress. I'd highly appreciate any comments, suggestions and experiences.

    Read the article

  • If we don't like it for the presentation layer, then why do we tolerate it for the behavior layer?

    - by greim
    Suppose CSS as we know it had never been invented, and the closest we could get was to do this: <script> // this is the page's stylesheet $(document).ready(function(){ $('.error').css({'color':'red'}); $('a[href]').css({'textDecoration':'none'}); ... }); </script> If this was how we were forced to write code, would we put up with it? Or would every developer on Earth scream at browser vendors until they standardized upon CSS, or at least some kind of declarative style language? Maybe CSS isn't perfect, but hopefully it's obvious how it's better than the find things, do stuff method shown above. So my question is this. We've seen and tasted of the glory of declarative binding with CSS, so why, when it comes to the behavioral/interactive layer, does the entire JavaScript community seem complacent about continuing to use the kludgy procedural method described above? Why for example is this considered by many to be the best possible way to do things: <script> $(document).ready(function(){ $('.widget').append("<a class='button' href='#'>...</div>"); $('a[href]').click(function(){...}); ... }); </script> Why isn't there a massive push to get XBL2.0 or .htc files or some kind of declarative behavior syntax implemented in a standard way across browsers? Is this recognized as a need by other web development professionals? Is there anything on the horizon for HTML5? (Caveats, disclaimers, etc: I realize that it's not a perfect world and that we're playing the hand we've been dealt. My point isn't to criticize the current way of doing things so much as to criticize the complacency that exists about the current way of doing things. Secondly, event delegation, especially at the root level, is a step closer to having a declarative behavior layer. It solves a subset of the problem, but it can't create UI elements, so the overall problem remains.)

    Read the article

  • Runtime.exec causes duplicate JVM to hang indefinitely until killed (Solaris 10)

    - by John
    All, We are running a J2EE application on WebLogic server 9.2 MP2 with a jrockit 64-bit JVM (27.3.1) on Solaris 10. We call use runtime.exec to call an executable called jfmerge to create PDF documents. We have found that in Solaris, when runtime.exec is called, a duplicate JVM is temporarily spawned to kick off the jfmerge process. While this is inefficient (our JVM is 5 GB, thus the duplicated shell JVM is also 5 GB), the major problem lies in the fact that when there is heavy load on this functionality (PDF generation) in our application, sometimes the duplicated JVM never exits. When the JVM hangs, the servers create large issues (extreme application slowness and terminated user sessions) as the entire duplicate JVM get's all of its 5 GB of process size written to disk swap. We have noted the following hung thread correlated with a hung JVM process until the process is manually killed: "[STUCK] ExecuteThread: '17' for queue: 'weblogic.kernel.Default (self-tuning)'" id=3463 idx=0x158 tid=3460 prio=1 alive, in native, daemon at jrockit/io/FileNativeIO.readBytesPinned(Ljava/io/FileDescriptor;[BII)I(Native Method) at jrockit/io/FileNativeIO.readBytes(FileNativeIO.java:30) at java/io/FileInputStream.readBytes([BII)I(FileInputStream.java) at java/io/FileInputStream.read(FileInputStream.java:194) at java/lang/UNIXProcess$DeferredCloseInputStream.read(UNIXProcess.java:227) at java/io/BufferedInputStream.fill(BufferedInputStream.java:218) at java/io/BufferedInputStream.read(BufferedInputStream.java:235) ^-- Holding lock: java/io/BufferedInputStream@0xfffffffec6510470[thin lock] at gov/v3/common/formgeneration/sessionbean/FormsBean.getProcessStatus(FormsBean.java:809) at gov/v3/common/formgeneration/sessionbean/FormsBean.createPDF(FormsBean.java:750) at gov/v3/common/formgeneration/sessionbean/FormsBean.getTemplateDetails(FormsBean.java:450) at gov/v3/common/formgeneration/sessionbean/FormsBean.generateSinglePDF(FormsBean.java:1371) at gov/v3/common/formgeneration/sessionbean/FormsBean.generatePDF(FormsBean.java:263) at gov/v3/common/formgeneration/sessionbean/FormsBean.endorseDocument(FormsBean.java:2377) at gov/v3/common/formgeneration/sessionbean/Forms_qaco28_EOImpl.endorseDocument(Forms_qaco28_EOImpl.java:214) at gov/v3/delegates/common/FormsAndNoticesDelegate.endorseDocument(FormsAndNoticesDelegate.java:128) at gov/v3/actions/common/EndorseDocumentAction.executeRequest(EndorseDocumentAction.java:68) at gov/v3/fwk/controller/struts/action/V3CommonDispatchAction.dispatchToExecuteMethod(V3CommonDispatchAction.java:532) at gov/v3/fwk/controller/struts/action/V3CommonDispatchAction.executeBaseAction(V3CommonDispatchAction.java:336) at gov/v3/fwk/controller/struts/action/V3BaseDispatchAction.execute(V3BaseDispatchAction.java:69) at org/apache/struts/action/RequestProcessor.processActionPerform(RequestProcessor.java:484) at gov/v3/fwk/controller/struts/requestprocessor/V3TilesRequestProcessor.processActionPerform(V3TilesRequestProcessor.java:384) at org/apache/struts/action/RequestProcessor.process(RequestProcessor.java:274) at org/apache/struts/action/ActionServlet.process(ActionServlet.java:1482) at org/apache/struts/action/ActionServlet.doGet(ActionServlet.java:507) at gov/v3/fwk/controller/struts/servlet/V3ControllerServlet.doGet(V3ControllerServlet.java:110) at javax/servlet/http/HttpServlet.service(HttpServlet.java:743) at javax/servlet/http/HttpServlet.service(HttpServlet.java:856) at weblogic/servlet/internal/StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227) at 
weblogic/servlet/internal/StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125) at weblogic/servlet/internal/ServletStubImpl.execute(ServletStubImpl.java:283) at weblogic/servlet/internal/ServletStubImpl.execute(ServletStubImpl.java:175) at weblogic/servlet/internal/WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3231) at weblogic/security/acl/internal/AuthenticatedSubject.doAs(AuthenticatedSubject.java:321) at weblogic/security/service/SecurityManager.runAs(SecurityManager.java:121) at weblogic/servlet/internal/WebAppServletContext.securedExecute(WebAppServletContext.java:2002) at weblogic/servlet/internal/WebAppServletContext.execute(WebAppServletContext.java:1908) at weblogic/servlet/internal/ServletRequestImpl.run(ServletRequestImpl.java:1362) at weblogic/work/ExecuteThread.execute(ExecuteThread.java:209) at weblogic/work/ExecuteThread.run(ExecuteThread.java:181) at jrockit/vm/RNI.c2java(JJJJJ)V(Native Method) -- end of trace We would like to do a couple of things: 1.) Prevent the spawning of a duplicate JVM, as we do not need any of its functions when executing the simple jfmerge executable, and it creates massive overhead. 2.) In the short term at least prevent this duplicate JVM from hanging indefinitely.
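    Not a fix for the fork-time memory duplication itself, but a sketch of a defensive pattern for goal 2: drain the child's output fully (an undrained pipe is a classic cause of exec'd processes never exiting) and bound the wait with a watchdog that destroys the process. Class name, command, and the five-minute deadline are illustrative assumptions.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.Timer;
import java.util.TimerTask;

/** Sketch: run jfmerge, drain its output, and kill it if it outlives a deadline. */
public class JfMergeRunner {

    public static int run(String... jfmergeCommand) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(jfmergeCommand);
        pb.redirectErrorStream(true);          // fold stderr into stdout so one reader suffices
        final Process process = pb.start();

        // Watchdog: destroy the child if it has not exited within 5 minutes.
        Timer watchdog = new Timer(true);
        watchdog.schedule(new TimerTask() {
            public void run() {
                process.destroy();
            }
        }, 5 * 60 * 1000L);

        // Drain stdout completely before waiting; otherwise the child can block on a full pipe.
        BufferedReader reader = new BufferedReader(new InputStreamReader(process.getInputStream()));
        try {
            while (reader.readLine() != null) {
                // discard (or log) jfmerge output
            }
        } finally {
            reader.close();
        }

        int exitCode = process.waitFor();
        watchdog.cancel();
        return exitCode;
    }
}
```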

    Read the article

  • Dynamically find other hosts in a LAN in Java

    - by Federico Cristina
    A while ago I developed a little LAN chat app in Java which allows chatting with other hosts, sending images, etc. Although it was created just for fun, now it's being used where I work. Currently, there is no "chat server" in the app where each client registers, updates its status, etc. (I liked the idea of symmetric design and not depending on a server running on some other machine). Instead, each host is a client/server which has a hosts.properties file with the hostname of the other hosts, and - for instance - broadcasts to each one of them when sending a massive message/image/whatever. In the beginning there were just a couple of hosts, so this hosts.properties file wasn't an issue. But as the number of users increased, the need to update that file was a bit daunting. So now I've decided to get rid of it, and each time the app starts, dynamically find the other active hosts. However, I cannot find the correct way to implement this. I've tried starting different threads, each one of them searching for other hosts in a known range of IP addresses. Something like this (simplified for the sake of readability): /** HostsLocator */ public static void searchForHosts(boolean waitToEnd) { for (int i=0; i < MAX_IP; i+= MAX_IP / threads) { HostsLocator detector = new HostsLocator(i, i+(MAX_IP / threads - 1)); // range: from - to new Thread(detector).start(); } } public void run() { for (int i=from; i<=to; i++) findHosts( maskAddress + Integer.toString(i) ); } public static boolean findHosts(String IP) { InetAddress address = InetAddress.getByName(IP); if ( address.isReachable(CONNECTION_TIME_OUT) ) // host found! } However: With a single thread and a low value in CONNECTION_TIME_OUT (500ms) I get a wrong Host Not Found status for hosts that are actually active. With a high value in CONNECTION_TIME_OUT (5000ms) and only a single thread, it takes forever to end. With several threads I've also found problems similar to the first one, due to collisions. So... I guess there's a better way of solving this problem but I couldn't find it. Any advice? Thanks!
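    One commonly used alternative to probing every address with isReachable() is a UDP broadcast handshake: each running instance listens on a known port and replies to a discovery datagram, so active peers announce themselves in one round trip. A minimal sketch (the port number and message contents are arbitrary choices); each instance would also need a small listener thread bound to the same port that answers probes.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

/** Sketch: broadcast a discovery probe and collect replies from peers on the LAN. */
public class PeerDiscovery {
    private static final int DISCOVERY_PORT = 4446;   // arbitrary; must match the listener

    public static void broadcastProbe() throws Exception {
        DatagramSocket socket = new DatagramSocket();
        socket.setBroadcast(true);
        socket.setSoTimeout(2000);                     // stop waiting for replies after 2s

        byte[] probe = "CHAT_DISCOVERY".getBytes("UTF-8");
        DatagramPacket request = new DatagramPacket(
                probe, probe.length,
                InetAddress.getByName("255.255.255.255"), DISCOVERY_PORT);
        socket.send(request);

        byte[] buffer = new byte[256];
        try {
            while (true) {
                DatagramPacket reply = new DatagramPacket(buffer, buffer.length);
                socket.receive(reply);                 // blocks until a peer answers or timeout
                System.out.println("Found host: " + reply.getAddress().getHostAddress());
            }
        } catch (java.net.SocketTimeoutException expected) {
            // no more replies within the timeout window
        } finally {
            socket.close();
        }
    }
}
```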

    Read the article

  • Finding contained bordered regions from Excel imports.

    - by dmaruca
    I am importing massive amounts of data from Excel that have various table layouts. I have good enough table detection routines and merge cell handling, but I am running into a problem when it comes to dealing with borders. Namely, performance. The bordered regions in some of these files have meaning. Data Setup: I am importing directly from Office Open XML using VB6 and MSXML. The data is parsed from the XML into a dictionary of cell data. This works wonderfully and is just as fast as using docmd.transferspreadsheet in Access, but returns much better results. Each cell contains a pointer to a style element which contains a pointer to a border element that defines the visibility and weight of each border (this is how the data is structured inside OpenXML, also). Challenge: What I'm trying to do is find every region that is enclosed inside borders, and create a list of cells that are inside that region. What I have done: I initially created a BFS (breadth-first search) fill routine to find these areas. This works wonderfully and fast for "normal" sized spreadsheets, but gets way too slow for imports into the thousands of rows. One problem is that a border in Excel could be stored in the cell you are checking or as the opposing border in the adjacent cell. That's OK, I can consolidate that data on import to reduce the number of checks needed. One thing I thought about doing is to create a separate graph that outlines the cells using the borders as my edges and using a graph algorithm to find regions that way, but I'm having trouble figuring out how to implement the algorithm. I've used Dijkstra in the past and thought I could do something similar with this. So I can span out using no endpoint to search the entire graph, and if I encounter a closed node I know that I just found an enclosed region, but how can I know if the route I've found is the optimal one? I guess I could flag that to run a separate check for the found closed node to the previous node ignoring that one edge. This could work, but wouldn't be much better performance-wise on dense graphs. Can anyone else suggest a better method? Thanks for taking the time to read this.

    Read the article

  • Casting objects in C# (ASP.Net MVC)

    - by Mortanis
    I'm coming from a background in ColdFusion, and finally moving onto something modern, so please bear with me. I'm running into a problem casting objects. I have two database tables that I'm using as Models - Residential and Commercial. Both of them share the majority of their fields, though each has a few unique fields. I've created another class as a container that contains the sum of all property fields. Query the Residential and Commercial, stuff it into my container, cunningly called Property. This works fine. However, I'm having problems aliasing the fields from Residential/Commercial onto Property. It's quite easy to create a method for each property: fillPropertyByResidential(Residential source) and fillPropertyByCommercial(Commercial source), and alias the variables. That also works fine, but quite obviously will copy a bunch of code - all those fields that are shared between the two main Models. So, I'd like a generic fillPropertyBySource() that takes the object, and detects if it's Residential or Commercial, fills the particular fields of each respective type, then do all the fields in common. Except, I gather in C# that variables created inside an If are only in the scope of the if, so I'm not sure how to do this. public property fillPropertyBySource(object source) { property prop = new property(); if (source is Residential) { Residential o = (Residential)source; //Fill Residential only fields } else if (source is Commercial) { Commercial o = (Commercial)source; //Fill Commercial only fields } //Fill fields shared by both prop.price = (int)o.price; prop.bathrooms = (float)o.bathrooms; return prop; } "o" being a Commercial or Residential only exists within the scope of the if. How do I detect the original type of the source object and take action? Bear with me - the shift from ColdFusion into a modern language is pretty..... difficult. More so since I'm used to procedural code and MVC is a massive paradigm shift. Edit: I should include the error: The name 'o' does not exist in the current context For the aliases of price and bathrooms in the shared area.
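    A sketch of one way to avoid the scoping problem without repeating the shared assignments: if Residential and Commercial are generated as partial classes (as the LINQ to SQL and Entity Framework designers do), a second partial declaration can attach a small shared interface to both, and the common fields can then be read once through that interface. The interface name and member types below are assumptions and would have to match the generated column types exactly.

```csharp
// Shared view of the columns both tables have in common (names/types assumed).
public interface IListing
{
    decimal price { get; }
    decimal bathrooms { get; }
}

// Attached in separate files so the designer-generated code stays untouched.
public partial class Residential : IListing { }
public partial class Commercial : IListing { }

public property fillPropertyBySource(object source)
{
    var prop = new property();

    // Type-specific fields: the cast variable is scoped to its branch, which is fine here.
    if (source is Residential)
    {
        var r = (Residential)source;
        // prop.someResidentialOnlyField = r.someResidentialOnlyField;
    }
    else if (source is Commercial)
    {
        var c = (Commercial)source;
        // prop.someCommercialOnlyField = c.someCommercialOnlyField;
    }

    // Shared fields: read them once through the common interface.
    if (source is IListing)
    {
        var listing = (IListing)source;
        prop.price = (int)listing.price;
        prop.bathrooms = (float)listing.bathrooms;
    }

    return prop;
}
```

    The same idea also works with a shared abstract base class, but an interface is usually the lighter touch when the classes are code-generated.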

    Read the article

  • 'Timeout Expired' error against local SQL Express on only 2 LINQ Methods...

    - by Refracted Paladin
    I am going to sum up my problem first and then offer massive details and what I have already tried. Summary: I have an internal winform app that uses Linq 2 Sql to connect to a local SQL Express database. Each user has their own DB and the DBs stay in sync through Merge Replication with a Central DB. All DBs are SQL 2005 (SP2 or SP3). We have been using this app for over 5 months now, but recently our users are getting a Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. Detailed: The strange part is they get that in two different locations (2 different LINQ methods) and only the first time they fire in a given time period (~5 mins). One LINQ method is pulling all records that match a FK ID and then manipulating them to form a hierarchy view for a TreeView. The second is pulling all records that match a FK ID and dumping them into a DataGridView. The only things I can find in common with the 2 are that the first IS an IEnumerable and the second converts itself from IQueryable - IEnumerable - DataTable... I looked at the queries in Profiler and they 'seemed' normal. They are not very complicated queries. They are only pulling back 10 - 90 records, from one table. Any thoughts, suggestions, hints, whatever would be greatly appreciated. I am at my wit's end on this.... public IList<CaseNoteTreeItem> GetTreeViewDataAsList(int personID) { var myContext = MatrixDataContext.Create(); var caseNotesTree = from cn in myContext.tblCaseNotes where cn.PersonID == personID orderby cn.ContactDate descending, cn.InsertDate descending select new CaseNoteTreeItem { CaseNoteID = cn.CaseNoteID, NoteContactDate = Convert.ToDateTime(cn.ContactDate). ToShortDateString(), ParentNoteID = cn.ParentNote, InsertUser = cn.InsertUser, ContactDetailsPreview = cn.ContactDetails.Substring(0, 75) }; return caseNotesTree.ToList<CaseNoteTreeItem>(); } AND THIS ONE public static DataTable GetAllCNotes(int personID) { using (var context = MatrixDataContext.Create()) { var caseNotes = from cn in context.tblCaseNotes where cn.PersonID == personID orderby cn.ContactDate select new { cn.ContactDate, cn.ContactDetails, cn.TimeSpentUnits, cn.IsCaseLog, cn.IsPreEnrollment, cn.PresentAtContact, cn.InsertDate, cn.InsertUser, cn.CaseNoteID, cn.ParentNote }; return caseNotes.ToList().CopyLinqToDataTable(); } }
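    One knob worth checking (a suggestion, not a diagnosis): LINQ to SQL's DataContext exposes a CommandTimeout property, so if the first call after an idle period is slow, for example because the connection pool or plan cache is cold, raising it at least distinguishes a slow first query from a genuinely hung one. A trimmed sketch against the MatrixDataContext from the question:

```csharp
public static DataTable GetAllCNotes(int personID)
{
    using (var context = MatrixDataContext.Create())
    {
        // Allow long-running first calls instead of failing at the default timeout.
        context.CommandTimeout = 120;

        var caseNotes = from cn in context.tblCaseNotes
                        where cn.PersonID == personID
                        orderby cn.ContactDate
                        select new
                        {
                            cn.ContactDate,
                            cn.ContactDetails,
                            cn.TimeSpentUnits,
                            cn.InsertUser,
                            cn.CaseNoteID
                        };

        return caseNotes.ToList().CopyLinqToDataTable();
    }
}
```

    If raising the timeout merely hides the delay, capturing the first-run query in Profiler alongside SET STATISTICS TIME would be the next step to see where the initial cost is going.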

    Read the article

  • Week in Geek: 4chan Falls Victim to DDoS Attack Edition

    - by Asian Angel
    This week we learned how to tweak the low battery action on a Windows 7 laptop, access an eBook collection anywhere in the world, “extend iPad battery life, batch resize photos, & sync massive music collections”, went on a reign of destruction with Snow Crusher, and had fun decorating our desktops with abstract icon collections. Photo by pasukaru76. Random Geek Links We have included extra news article goodness to help you catch up on any developments that you may have missed during the holiday break this past week. Note: The three 27C3 articles listed here represent three different presentations at the 27th Chaos Communication Congress hacker conference. 4chan victim of DDoS as FBI investigates role in PayPal attack Users of 4chan may have gotten a taste of their own medicine after the site was knocked offline by a DDoS attack from an unknown origin early Thursday morning. Report: FBI seizes server in probe of WikiLeaks attacks The FBI has seized a server in Texas as part of its hunt for the groups behind the pro-WikiLeaks denial-of-service attacks launched in December against PayPal, Visa, MasterCard, and others. Mozilla exposes older user-account database Mozilla has disabled 44,000 older user accounts for its Firefox add-ons site after a security researcher found part of a database of the account information on a publicly available server. Data breach affects 4.9 million Honda customers Japanese automaker Honda has put some 2.2 million customers in the United States on a security breach alert after a database containing information on the owners and their cars was hacked. Chinese Trojan discovered in Android games An Android-based Trojan called “Geinimi” has been discovered in the wild and the Trojan is capable of sending personal information to remote servers and exhibits botnet-like behavior. 27C3 presentation claims many mobiles vulnerable to SMS attacks According to security experts, an ‘SMS of death’ threatens to disable many current Sony Ericsson, Samsung, Motorola, Micromax and LG mobiles. 27C3: GSM cell phones even easier to tap Security researchers have demonstrated how open source software on a number of revamped, entry-level cell phones can decrypt and record mobile phone calls in the GSM network. 27C3: danger lurks in PDF documents Security researcher Julia Wolf has pointed out numerous, previously hardly known, security problems in connection with Adobe’s PDF standard. Critical update for WordPress A critical update has been made available for WordPress in the form of version 3.0.4. The update fixes a security bug in WordPress’s KSES library. McAfee Labs Predicts Geolocation, Mobile Devices and Apple Will Top the List of Targets for Emerging Threats in 2011 The list comprises 2010’s most buzzed about platforms and services, including Google’s Android, Apple’s iPhone, foursquare, Google TV and the Mac OS X platform, which are all expected to become major targets for cybercriminals. McAfee Labs also predicts that politically motivated attacks will be on the rise. Windows Phone 7 piracy materializes with FreeMarketplace A proof-of-concept application, FreeMarketplace, that allows any Windows Phone 7 application to be downloaded and installed free of charge has been developed. Empty email accounts, and some bad buzz for Hotmail In the past few days, a number of Hotmail users have been complaining about a rather disconcerting issue: their Hotmail accounts, some up to 10 years old, appear completely empty.  
No emails, no folders, nothing, just what appears to be a new account. Reports: Nintendo warns of 3DS risk for kids Nintendo has reportedly issued a warning that the 3DS, its eagerly awaited glasses-free 3D portable gaming device, should not be used by children under 6 when the gadget is in 3D-viewing mode. Google eyes ‘cloaking’ as next antispam target Google plans to take a closer look at the practice of “cloaking,” or presenting one look to a Googlebot crawling one’s site while presenting another look to users. Facebook, Twitter stock trading drawing SEC eye? The high degree of investor interest in shares of hot Silicon Valley companies that aren’t yet publicly traded–like Facebook, Twitter, LinkedIn, and Zynga–may be leading to scrutiny from the U.S. Securities and Exchange Commission (SEC). Random TinyHacker Links Photo by jcraveiro. Exciting Software Set for Release in 2011 A few bloggers from great websites such as How-To Geek, Guiding Tech and 7 Tutorials took the time to sit down and talk about their software wishes for 2011. Take the time to read it and share… Wikileaks Infopr0n An infographic detailing the quest to plug WikiLeaks. The New York Times Guide to Mobile Apps A growing collection of all mobile app coverage by the New York Times as well as lists of favorite apps from Times writers. 7,000,000,000 (Video) A fascinating look at the world’s population via National Geographic Magazine. Super User Questions Check out the great answers to these hot questions from Super User. How to use a Personal computer as a Linux web server for development purposes? How to link processing power of old computers together? Free virtualization tool for testing suspicious files? Why do some actions not work with Remote Desktop? What is the simplest way to send a large batch of pictures to a distant friend or colleague? How-To Geek Weekly Article Recap Had a busy week and need to get caught up on your HTG reading? Then sit back and relax while enjoying these hot posts full of how-to roundup goodness. The 50 Best How-To Geek Windows Articles of 2010 The 20 Best How-To Geek Explainer Topics for 2010 The 20 Best How-To Geek Linux Articles of 2010 How to Search Just the Site You’re Viewing Using Google Search Ask the Readers: Backing Your Files Up – Local Storage versus the Cloud One Year Ago on How-To Geek Need more how-to geekiness for your weekend? Then look through this great batch of articles from one year ago that focus on dual-booting and O.S. installation goodness. Dual Boot Your Pre-Installed Windows 7 Computer with Vista Dual Boot Your Pre-Installed Windows 7 Computer with XP How To Setup a USB Flash Drive to Install Windows 7 Dual Boot Your Pre-Installed Windows 7 Computer with Ubuntu Easily Install Ubuntu Linux with Windows Using the Wubi Installer The Geek Note We hope that you and your families have had a terrific holiday break as everyone prepares to return to work and school this week. Remember to keep those great tips coming in to us at [email protected]! Photo by pjbeardsley. 

    Read the article

  • How to Reduce the Size of Your WinSXS Folder on Windows 7 or 8

    - by Chris Hoffman
    The WinSXS folder at C:\Windows\WinSXS is massive and continues to grow the longer you have Windows installed. This folder builds up unnecessary files over time, such as old versions of system components. It also contains files for uninstalled, disabled Windows components. Even if you don’t have a Windows component installed, it will be present in your WinSXS folder, taking up space.

Why the WinSXS Folder Gets So Big

The WinSXS folder contains all Windows system components. In fact, component files elsewhere in Windows are just links to files contained in the WinSXS folder. The WinSXS folder contains every operating system file. When Windows installs updates, it drops the new Windows component in the WinSXS folder and keeps the old component in the WinSXS folder. This means that every Windows Update you install increases the size of your WinSXS folder. This allows you to uninstall operating system updates from the Control Panel, which can be useful in the case of a buggy update — but it’s a feature that’s rarely used.

Windows 7 dealt with this by including a feature that allows Windows to clean up old Windows update files after you install a new Windows service pack. The idea was that the system could be cleaned up regularly along with service packs. However, Windows 7 only saw one service pack — Service Pack 1 — released in 2011, and Microsoft has no intention of launching another. This means that Windows update uninstallation files have been building up on Windows 7 systems for years and couldn’t be easily removed.

Clean Up Update Files

To fix this problem, Microsoft recently backported a feature from Windows 8 to Windows 7. They did this without much fanfare — it was rolled out in a typical minor operating system update, the kind that doesn’t generally add new features.

To clean up such update files, open the Disk Cleanup wizard (tap the Windows key, type “disk cleanup” into the Start menu, and press Enter). Click the Clean up System Files button, enable the Windows Update Cleanup option and click OK. If you’ve been using your Windows 7 system for a few years, you’ll likely be able to free several gigabytes of space. The next time you reboot after doing this, Windows will take a few minutes to clean up system files before you can log in and use your desktop. If you don’t see this feature in the Disk Cleanup window, you’re likely behind on your updates — install the latest updates from Windows Update.

Windows 8 and 8.1 include built-in features that do this automatically. In fact, there’s a StartComponentCleanup scheduled task included with Windows that will automatically run in the background, cleaning up components 30 days after you’ve installed them. This 30-day period gives you time to uninstall an update if it causes problems. If you’d like to manually clean up updates, you can also use the Windows Update Cleanup option in the Disk Cleanup window, just as you can on Windows 7. (To open it, tap the Windows key, type “disk cleanup” to perform a search, and click the “Free up disk space by removing unnecessary files” shortcut that appears.)

Windows 8.1 gives you more options, allowing you to forcibly remove all previous versions of uninstalled components, even ones that haven’t been around for more than 30 days. These commands must be run in an elevated Command Prompt — in other words, start the Command Prompt window as Administrator. 
For example, the following command will uninstall all previous versions of components without the scheduled task’s 30-day grace period:

    DISM.exe /online /Cleanup-Image /StartComponentCleanup

The following command will remove files needed for uninstallation of service packs. You won’t be able to uninstall any currently installed service packs after running this command:

    DISM.exe /online /Cleanup-Image /SPSuperseded

The following command will remove all old versions of every component. You won’t be able to uninstall any currently installed service packs or updates after this completes:

    DISM.exe /online /Cleanup-Image /StartComponentCleanup /ResetBase

Remove Features on Demand

Modern versions of Windows allow you to enable or disable Windows features on demand. You’ll find a list of these features in the Windows Features window you can access from the Control Panel. Even features you don’t have installed — that is, the features you see unchecked in this window — are stored on your hard drive in your WinSXS folder. If you choose to install them, they’ll be made available from your WinSXS folder. This means you won’t have to download anything or provide Windows installation media to install these features.

However, these features take up space. While this shouldn’t matter on typical computers, users with extremely low amounts of storage or Windows server administrators who want to slim their Windows installs down to the smallest possible set of system files may want to get these files off their hard drives. For this reason, Windows 8 added a new option that allows you to remove these uninstalled components from the WinSXS folder entirely, freeing up space. If you choose to install the removed components later, Windows will prompt you to download the component files from Microsoft.

To do this, open a Command Prompt window as Administrator. Use the following command to see the features available to you:

    DISM.exe /Online /English /Get-Features /Format:Table

You’ll see a table of feature names and their states. To remove a feature from your system, you’d use the following command, replacing NAME with the name of the feature you want to remove. You can get the feature name you need from the table above.

    DISM.exe /Online /Disable-Feature /featurename:NAME /Remove

If you run the /Get-Features command again, you’ll now see that the feature has a status of “Disabled with Payload Removed” instead of just “Disabled.” That’s how you know it’s not taking up space on your computer’s hard drive.

If you’re trying to slim down a Windows system as much as possible, be sure to check out our lists of ways to free up disk space on Windows and reduce the space used by system files.     
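If you run these cleanup passes regularly, the commands above can be collected into a small batch file. The sketch below is only an illustration and is not part of the original article: the file name cleanup-winsxs.cmd is made up, the AnalyzeComponentStore reporting step assumes Windows 8.1 or later, and the script must be run from an elevated Command Prompt.

    @echo off
    rem cleanup-winsxs.cmd -- hypothetical helper; run from an elevated Command Prompt.
    rem Report how much space the component store is currently using (Windows 8.1 and later).
    DISM.exe /Online /Cleanup-Image /AnalyzeComponentStore
    rem Remove superseded component versions without waiting out the 30-day grace period.
    DISM.exe /Online /Cleanup-Image /StartComponentCleanup
    rem Irreversible options: also drop service pack backups and all old component versions.
    rem Uncomment these lines only if you will never need to uninstall updates or service packs.
    rem DISM.exe /Online /Cleanup-Image /SPSuperseded
    rem DISM.exe /Online /Cleanup-Image /StartComponentCleanup /ResetBase

Leaving the /SPSuperseded and /ResetBase lines commented out keeps the script reversible by default.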

    Read the article

  • Introducing Oracle VM Server for SPARC

    - by Honglin Su
    As you are watching Oracle's Virtualization Strategy Webcast and exploring the great virtualization offerings of the Oracle VM product line, I'd like to introduce Oracle VM Server for SPARC -- a highly efficient, enterprise-class virtualization solution for Sun SPARC Enterprise Systems with Chip Multithreading (CMT) technology.

Oracle VM Server for SPARC, previously called Sun Logical Domains, leverages the built-in SPARC hypervisor to subdivide supported platforms' resources (CPUs, memory, network, and storage) by creating partitions called logical (or virtual) domains. Each logical domain can run an independent operating system. Oracle VM Server for SPARC provides the flexibility to deploy multiple Oracle Solaris operating systems simultaneously on a single platform. Oracle VM Server also allows you to create up to 128 virtual servers on one system to take advantage of the massive thread scale offered by the CMT architecture.

Oracle VM Server for SPARC integrates both the industry-leading CMT capability of the UltraSPARC T1, T2 and T2 Plus processors and the Oracle Solaris operating system. This combination helps to increase flexibility, isolate workload processing, and improve the potential for maximum server utilization.

Oracle VM Server for SPARC delivers the following:

Leading Price/Performance - The low-overhead architecture provides scalable performance under increasing workloads without additional license cost. This enables you to meet the most aggressive price/performance requirement.

Advanced RAS - Each logical domain is an entirely independent virtual machine with its own OS. It supports virtual disk multipathing and failover as well as faster network failover with link-based IP multipathing (IPMP) support. Moreover, it's fully integrated with Solaris FMA (Fault Management Architecture), which enables predictive self-healing.

CPU Dynamic Resource Management (DRM) - Enable your resource management policy and domain workload to trigger the automatic addition and removal of CPUs. This ability helps you to better align with your IT and business priorities.

Enhanced Domain Migrations - Perform domain migrations interactively and non-interactively to bring more flexibility to the management of your virtualized environment. Improve active domain migration performance by compressing memory transfers and taking advantage of cryptographic acceleration hardware. These methods provide faster migration for load balancing, power saving, and planned maintenance.

Dynamic Crypto Control - Dynamically add and remove cryptographic units (aka MAU) to and from active domains. Also, migrate active domains that have cryptographic units.

Physical-to-virtual (P2V) Conversion - Quickly convert an existing SPARC server running the Oracle Solaris 8, 9 or 10 OS into a virtualized Oracle Solaris 10 image. Use this image to facilitate OS migration into the virtualized environment.

Virtual I/O Dynamic Reconfiguration (DR) - Add and remove virtual I/O services and devices without needing to reboot the system.

CPU Power Management - Implement power saving by disabling each core on a Sun UltraSPARC T2 or T2 Plus processor that has all of its CPU threads idle.

Advanced Network Configuration - Configure the following network features to obtain more flexible network configurations, higher performance, and scalability: Jumbo frames, VLANs, virtual switches for link aggregations, and network interface unit (NIU) hybrid I/O. 
Official Certification Based On Real-World Testing - Use Oracle VM Server for SPARC with the most sophisticated enterprise workloads under real-world conditions, including Oracle Real Application Clusters (RAC).

Affordable, Full-Stack Enterprise Class Support - Obtain worldwide support from Oracle for the entire virtualization environment and workloads together. The support covers hardware, firmware, OS, virtualization, and the software stack.

SPARC Server Virtualization

Oracle offers a full portfolio of virtualization solutions to address your needs. SPARC is the leading platform to have the hard partitioning capability that provides the physical isolation needed to run independent operating systems. Many customers have already used Oracle Solaris Containers for application isolation. Oracle VM Server for SPARC provides another important feature with OS isolation. This gives you the flexibility to deploy multiple operating systems simultaneously on a single Sun SPARC T-Series server with finer granularity for computing resources.

For SPARC CMT processors, the natural level of granularity is an execution thread, not a time-sliced microsecond of execution resources. Each CPU thread can be treated as an independent virtual processor. The scheduler is naturally built into the CPU for lower overhead and higher performance. Your organization can couple Oracle Solaris Containers and Oracle VM Server for SPARC with the breakthrough space and energy savings afforded by Sun SPARC Enterprise systems with CMT technology to deliver a more agile, responsive, and low-cost environment.

Management with Oracle Enterprise Manager Ops Center

The Oracle Enterprise Manager Ops Center Virtualization Management Pack provides full lifecycle management of virtual guests, including Oracle VM Server for SPARC and Oracle Solaris Containers. It helps you streamline operations and reduce downtime. Together, the Virtualization Management Pack and the Ops Center Provisioning and Patch Automation Pack provide an end-to-end management solution for physical and virtual systems through a single web-based console. This solution automates the lifecycle management of physical and virtual systems and is the most effective systems management solution for Oracle's Sun infrastructure.

Ease of Deployment with Configuration Assistant

The Oracle VM Server for SPARC Configuration Assistant can help you easily create logical domains. After gathering the configuration data, the Configuration Assistant determines the best way to create a deployment to suit your requirements. The Configuration Assistant is available as both a graphical user interface (GUI) and terminal-based tool.

Oracle Solaris Cluster HA Support

The Oracle Solaris Cluster HA for Oracle VM Server for SPARC data service provides a mechanism for orderly startup and shutdown, fault monitoring and automatic failover of the Oracle VM Server guest domain service. In addition, applications that run on a logical domain, as well as its resources and dependencies, can be controlled and managed independently. These are managed as if they were running in a classical Solaris Cluster hardware node.

Supported Systems

Oracle VM Server for SPARC is supported on all Sun SPARC Enterprise Systems with CMT technology. 
UltraSPARC T2 Plus Systems
·   Sun SPARC Enterprise T5140 Server
·   Sun SPARC Enterprise T5240 Server
·   Sun SPARC Enterprise T5440 Server
·   Sun Netra T5440 Server
·   Sun Blade T6340 Server Module
·   Sun Netra T6340 Server Module

UltraSPARC T2 Systems
·   Sun SPARC Enterprise T5120 Server
·   Sun SPARC Enterprise T5220 Server
·   Sun Netra T5220 Server
·   Sun Blade T6320 Server Module
·   Sun Netra CP3260 ATCA Blade Server

Note that UltraSPARC T1 systems are supported on earlier versions of the software. Sun SPARC Enterprise Systems with CMT technology come with the right to use (RTU) of Oracle VM Server, and the software is pre-installed. If you have the systems under warranty or with support, you can download the software and system firmware as well as their updates. Oracle Premier Support for Systems provides fully-integrated support for your server hardware, firmware, OS, and virtualization software. Visit oracle.com/support for information about Oracle's support offerings for Sun systems. For more information about Oracle's virtualization offerings, visit oracle.com/virtualization.
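To make the logical domain concept more concrete, here is a minimal command-line sketch of creating and starting a guest domain with the Logical Domains Manager on the control domain. It is not part of the original post: the domain name ldg1, the resource sizes, and the virtual switch and disk service names (primary-vsw0, vol0@primary-vds0) are illustrative assumptions, and the exact ldm syntax can vary between releases.

    # Run as an administrator on the control domain; all names and sizes are examples only.
    ldm add-domain ldg1                          # create an empty guest domain
    ldm add-vcpu 8 ldg1                          # assign 8 CPU threads as virtual CPUs
    ldm add-memory 4G ldg1                       # assign 4 GB of memory
    ldm add-vnet vnet0 primary-vsw0 ldg1         # virtual network device on an existing virtual switch
    ldm add-vdisk vdisk0 vol0@primary-vds0 ldg1  # virtual disk backed by an existing disk service volume
    ldm bind-domain ldg1                         # bind the resources to the domain
    ldm start-domain ldg1                        # start the guest and install Solaris from its console
    ldm list-domain                              # confirm the new domain is active

In practice, the Configuration Assistant described above performs these same steps for you, which is usually the easier route for a first deployment.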

    Read the article

  • Getting Started with ASP.NET Membership, Profile and RoleManager

    - by Ben Griswold
    A new ASP.NET MVC project includes preconfigured Membership, Profile and RoleManager providers right out of the box.  Try it yourself – create an ASP.NET MVC application, crack open the web.config file and have a look.  First, you’ll find the ApplicationServices database connection:

<connectionStrings>
  <add name="ApplicationServices"
       connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|aspnetdb.mdf;User Instance=true"
       providerName="System.Data.SqlClient"/>
</connectionStrings>

Notice the connection string is referencing the aspnetdb.mdf database hosted by SQL Express and it’s using integrated security so it’ll just work for you without having to call out a specific database login or anything. Scroll down the file a bit and you’ll find each of the three noted sections:

<membership>
  <providers>
    <clear/>
    <add name="AspNetSqlMembershipProvider"
         type="System.Web.Security.SqlMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
         connectionStringName="ApplicationServices"
         enablePasswordRetrieval="false"
         enablePasswordReset="true"
         requiresQuestionAndAnswer="false"
         requiresUniqueEmail="false"
         passwordFormat="Hashed"
         maxInvalidPasswordAttempts="5"
         minRequiredPasswordLength="6"
         minRequiredNonalphanumericCharacters="0"
         passwordAttemptWindow="10"
         passwordStrengthRegularExpression=""
         applicationName="/" />
  </providers>
</membership>

<profile>
  <providers>
    <clear/>
    <add name="AspNetSqlProfileProvider"
         type="System.Web.Profile.SqlProfileProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
         connectionStringName="ApplicationServices"
         applicationName="/" />
  </providers>
</profile>

<roleManager enabled="false">
  <providers>
    <clear />
    <add connectionStringName="ApplicationServices" applicationName="/" name="AspNetSqlRoleProvider" type="System.Web.Security.SqlRoleProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
    <add applicationName="/" name="AspNetWindowsTokenRoleProvider" type="System.Web.Security.WindowsTokenRoleProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
  </providers>
</roleManager>

Really. It’s all there. Still don’t believe me?  Run the application, walk through the registration process and finally log in and log out.  Completely functional – and you didn’t have to do a thing!

What else?  Well, you can manage your users via the Configuration Manager which is hiding in Visual Studio behind Projects > ASP.NET Configuration. The ASP.NET Web Site Administration Tool isn’t MVC-specific (neither is the Membership, Profile or RoleManager stuff) but it’s neat and I hardly ever see anyone using it.  Here you can set up and edit users, roles, and set access permissions for your site. You can manage application settings, establish your SMTP settings, configure debugging and tracing, define a default error page and even take your application offline.  The UI is rather plain-Jane but it works great.

And here’s the best of all.  Let’s say you, like most of us, don’t want to run your application on top of the aspnetdb.mdf database.  Let’s suppose you want to use your own database and you’d like to add the membership stuff to it.  Well, that’s easy enough. 
Take a look inside your [drive:]\%windir%\Microsoft.Net\Framework\v2.0.50727\ folder.  Here you’ll find a bunch of files.  If you were to run the InstallCommon.sql, InstallMembership.sql, InstallRoles.sql and InstallProfile.sql files against the database of your choice, you’d be installing the same membership, profile and role artifacts which are found in aspnetdb.mdf to your own database.  Too much trouble?  Okay. Run [drive:]\%windir%\Microsoft.Net\Framework\v2.0.50727\aspnet_regsql.exe from the command line instead.  This will launch the ASP.NET SQL Server Setup Wizard which walks you through the installation of those same database objects into the new or existing database of your choice. You may not always have the luxury of using this tool on your destination server, but you should use it whenever you can.  Last tip: don’t forget to update the ApplicationServices connection string to point to your custom database after the setup is complete.

At the risk of sounding like a smarty, everything I’ve mentioned in this post has been around for quite a while. The thing is that not everyone has had the opportunity to use it.  And it makes sense. I know I’ve worked on projects which used custom membership services.  Why bother with the out-of-the-box stuff, right?   And the .NET framework is so massive, who can know it all?  Well, eventually you might have a chance to architect your own solution using any implementation you’d like or you will have the time to play around with another aspect of the framework.  When you do, think back to this post.
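The ASP.NET SQL Server Setup Wizard can also be driven entirely from the command line, which is handy for scripted or unattended setups. The following is a hedged sketch rather than part of the original post: the server name .\SQLEXPRESS and the database name MyAppDb are made-up examples, and it assumes the standard aspnet_regsql.exe switches (-S for the server, -E for Windows authentication, -d for the target database, and -A mrp to add the membership, role and profile features).

    rem Install the membership, role and profile objects into an existing database (example names).
    "%windir%\Microsoft.Net\Framework\v2.0.50727\aspnet_regsql.exe" -S .\SQLEXPRESS -E -d MyAppDb -A mrp

    rem The -R switch reverses the operation if you later want those objects removed.
    rem "%windir%\Microsoft.Net\Framework\v2.0.50727\aspnet_regsql.exe" -S .\SQLEXPRESS -E -d MyAppDb -R mrp

Either way, remember to point the ApplicationServices connection string at the new database afterward, as noted above.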

    Read the article

< Previous Page | 29 30 31 32 33 34 35 36 37  | Next Page >