Search Results

Search found 11268 results on 451 pages for 'shweta simply'.


  • Cloud – the forecast is improving

    - by Rob Farley
    There is a lot of discussion about “the cloud”, and how that affects people’s data stories. Today the discussion enters the realm of T-SQL Tuesday, hosted this month by Jorge Segarra. Over the years, companies have invested a lot in making sure that their data is good – and I mean every aspect of it: the quality of it, the security of it, the performance of it, and more. Experts such as those of us at LobsterPot Solutions have helped these companies with this, and continue to work with clients to make sure that data is a strong part of their business, not an oversight. Whether business intelligence systems are being utilised or not, every business needs to be able to rely on its data and have confidence in it. Data should be a foundation upon which a business is built.

    In the past, data was stored in paper-based systems. Filing cabinets stored vital information. Today, people have server rooms with storage of various kinds, recognising that filing cabinets don’t necessarily scale particularly well. It’s easy to ‘lose’ data in a filing cabinet: you depend on people to keep the sheets of paper in the right spot and to know how things are stored. Databases help solve that problem, but the idea of a large filing cabinet continues; it just doesn’t involve paper. If something happens to the physical ‘filing cabinet’, the problems are larger still, because then the data itself is under threat. Many clients have generators in case the power goes out, redundant cables in case the connectivity dies, and spare servers in other buildings just in case they’re required. But still they’re maintaining filing cabinets.

    You see, people like filing cabinets. There’s something to be said for having your data ‘close’. Even if the data is not in readable form, living as bits on a disk somewhere, the idea that its home is ‘in the building’ is comforting to many people. They simply don’t want to move their data anywhere else. The cloud offers an alternative to this, and the human element is an obstacle. By leveraging the cloud, companies can have someone else look after their filing cabinet. A lot of people really don’t like the idea of this, partly because the administrators of the data – those people who could potentially log in with escalated rights and see more than they should be allowed to, and who need to be trusted to respond if there’s a problem – are now a faceless entity in the cloud. But this doesn’t mean that the cloud is bad; it is simply a concern that some people may have.

    In new functionality that’s on its way, we see hybrid mechanisms that let people leverage parts of the cloud with less fear. Companies can use cloud storage to hold their backup data, for example – backups that have been encrypted and therefore cannot be read by anyone (including administrators) who doesn’t have the right password. Companies can have a database instance that runs locally, but which has its data files in the cloud, complete with Transparent Data Encryption if needed. There can be a higher level of control, making the change easier to accept. Hybrid options allow people who have had fears (potentially very justifiable) to take a new look at the cloud, and to start embracing some of its benefits (such as letting someone else take care of storage, high availability, and more) without losing the feeling of the data being close. @rob_farley


  • Career-Defining Moments

    - by Robz / Fervent Coder
    Originally posted on: http://geekswithblogs.net/robz/archive/2013/06/25/career-defining-moments.aspx Fear holds us back from many things. A little fear is healthy, but don’t let it overwhelm you into missing opportunities. In every career there is a moment when you can either step forward and define yourself, or sit down and regret it later. Why do we hold back: is it fear, constraints, family concerns, or that we simply can’t do it? I think in many cases it comes down to the unknown, and we are good at fearing the unknown. Some people hold back because they are fearful of what they don’t know. Some hold back because they are fearful of learning new things. Some hold back simply because taking on a new challenge means they have to give something else up. The phrase sometimes used is “It’s the devil you know versus the one you don’t.” That fear sometimes makes us miss great opportunities. In many people’s case it is the opportunity to go into business for yourself, to start something that never existed. Most hold back here for fear of failing.

    We’ve all heard the phrase “What would you do if you knew you couldn’t fail?”, which is intended to get people to think about the opportunities they might create. A better version I heard recently on the Ruby Rogues podcast was “What would be worth doing even if you knew you were going to fail?” I think that wording suits the intent better. If you knew (or thought) going in that you were going to fail and you didn’t care, it would open you up to the possibility of paying more attention to the journey and not the outcome.

    In my case it is a fear of acceptance. I am fearful that I may not learn what I need to learn or may not do a good enough job to be accepted. At the same time that fear drives me and makes me want to leap forward. Some folks would define this as “The Flinch”. I’m learning Ruby and Puppet right now. I have limited experience with both – limited to the degree that it scares me some that I don’t know much about either. Okay, it scares me quite a bit!

    Some people’s defining moment might be going to work for Microsoft. All of you who know me know that I am in love with automation, from low-tech to high-tech automation. So for me, my “mecca” is a little different in that regard. A while back I sat down and defined where I wanted my career to go, and it had more to do with DevOps, defined as applying developer practices to system administration operations (I could not find this definition when I searched). It’s an area that interests me and why I really want to expand chocolatey into something more awesome. I want to see Windows be as automatable and awesome as other operating systems that are out there.

    Back to the career-defining moment. Sometimes these moments only come once in a lifetime. The key is to recognize when you are in one of these moments and step back to evaluate it before choosing to dive in head first. So I am about to embark on what I define as one of these “moments.” On July 1st I will be joining Puppet Labs and working to help make the Windows automation experience rock solid! I’m both scared and excited about the opportunity!


  • Can someone explain the true landscape of Rails vs PHP deployment, particularly within the context of Reseller-based web hosting (e.g., Hostgator)?

    - by rcd
    Currently, I have a reseller account with the company HostGator. I design websites, which up until now have occasionally been wrapped in WordPress CMSs and the like (PHP applications). I then sell hosting (of the site I've designed) to the client, which is pretty simple, in that I can simply click a button and add a new shared hosting account/site with whatever settings I want. Furthermore, I then utilize WHMCS to automate billing and account management. It's a nice package and pretty simple. I pay something like $25 a month, and can sell a hundred accounts under this (because my clients' bandwidth requirements are low).

    Now I am finding the need to develop more customized applications, including a minimalist CMS and several proprietary things. I soon anticipate developing these apps for clients as well. Thus, I've spent the past few months learning Rails, and it's coming along well now. The thing that has nagged at me all along, though, is the deployment issue. I can't wrap my brain around it. It seems like all of the popular options (Heroku, etc.) have nice automation with git and are set up in the "Rails Way". I get that (sort of). But it's terribly expensive... a single dyno, a helper, and the cheapest database (which they say is mainly suitable for testing) that isn't limited to 5MB runs $51. This is for ONE app!!! Throw in a "production" DB and you're over $200. This is like... the same price as getting a server somewhere, right?

    Meanwhile, going back to what I guess is a "traditional" hosting environment with HostGator, their server only has Ruby 1.8.7 and Rails 2.3.5... no Rails 3. AND no Passenger (not that I really understand the difference between CGI and mod_rails or whatever, but they say Passenger is the simplest). So am I to understand that if I build an app in Rails 3, it won't run at all on this host? But damn, I already have these accounts under my reseller account there, all running static HTML and/or PHP stuff, right? So what now? How do I get all of this under one simple (and affordable) roof?

    Forgive my ignorance, but I just don't get it. Managing a VPS is cool and all, but it entails learning server admin stuff and security... and it's expensive. I get that a shared and/or reseller "server-based" (forgive the terminology) setup may be inadequate for large-scale apps that use a lot of bandwidth... but what about for those of us who are building real (but small and low-bandwidth) apps (with Rails) and who want to deploy them simply, cheaply, using the same conceptual approach as PHP? Even after learning all of this Ruby and Rails stuff for months, I'm questioning whether it's worth it when it comes to deployment. I want to build a small app, upload it to my home directory on a shared server account, and just make it run. Why should that be so hard? Am I just choosing the wrong language/framework? Forgive my ignorance in the subject; these questions are not rhetorical; I'm just trying to learn here. So: 1) I'd appreciate it if someone could give me a good rundown of how to understand deployment in Rails vs. PHP. 2) I'd appreciate it if someone could address my issue with running a hosting/web business around reseller hosting (HostGator) while also being able to host Rails apps. Can it be done? And how can a company like HostGator completely ignore what's current in Rails/Ruby? Thanks.


  • A* algorithm very slow

    - by Amaranth
    I am programming an RTS game (I use XNA with C#). The pathfinding is working fine, except that when it has a lot of nodes to search, there is a lag of one or two seconds. It happens mainly when there is no path to the target destination, since in that situation there are more nodes to explore. I have the same problem when the path is shorter but more than 3 units are selected (they can't all take the same path, since the selected units can be in different parts of the map).

        private List<NodeInfo> FindPath(Unit u, NodeInfo start, NodeInfo end)
        {
            Map map = GameInfo.GetInstance().GameMap;
            _nearestToTarget = start;
            start.MoveCost = 0;
            // getTileByPos simply gets the tile in a 2D array with the X and Y indexes
            Vector2 endPosition = map.getTileByPos(end.X, end.Y).Position;
            start.EstimatedRemainingCost = (int)(endPosition - map.getTileByPos(start.X, start.Y).Position).Length();
            start.Parent = null;

            List<NodeInfo> openedNodes = new List<NodeInfo>();
            List<NodeInfo> closedNodes = new List<NodeInfo>();
            Point[] movements = GetMovements(u.UnitType);
            openedNodes.Add(start);

            while (!closedNodes.Contains(end) && openedNodes.Count > 0)
            {
                // Loop over the nodes to find the lowest cost
                NodeInfo currentNode = FindLowestCostOpenedNode(openedNodes);
                openedNodes.Remove(currentNode);
                closedNodes.Add(currentNode);

                Vector2 previousMouvement;
                if (currentNode.Parent == null)
                {
                    previousMouvement = ConvertRotationToDirectionVector(u.Rotation);
                }
                else
                {
                    previousMouvement = map.getTileByPos(currentNode.X, currentNode.Y).Position
                        - map.getTileByPos(currentNode.Parent.X, currentNode.Parent.Y).Position;
                    previousMouvement.Normalize();
                }

                // For each neighbor
                foreach (Point movement in movements)
                {
                    Point exploredGridPos = new Point(currentNode.X + movement.X, currentNode.Y + movement.Y);

                    // Checks if the move is valid and the node is not in the closed nodes list
                    if (ValidNavigableNode(u.UnitType, new Point(currentNode.X, currentNode.Y), exploredGridPos)
                        && !closedNodes.Contains(_gridMap[exploredGridPos.Y, exploredGridPos.X]))
                    {
                        NodeInfo exploredNode = _gridMap[exploredGridPos.Y, exploredGridPos.X];
                        Tile.TileType exploredTerrain = map.getTileByPos(exploredGridPos.X, exploredGridPos.Y).TerrainType;

                        if (openedNodes.Contains(exploredNode))
                        {
                            int newCost = currentNode.MoveCost + GetMoveCost(previousMouvement, movement, exploredTerrain);
                            if (newCost < exploredNode.MoveCost)
                            {
                                exploredNode.Parent = currentNode;
                                exploredNode.MoveCost = newCost;
                                // Find the nearest tile to the target (in case no path to the target is found)
                                // Only compares the node to the current nearest
                                FindNearest(exploredNode);
                            }
                        }
                        else
                        {
                            exploredNode.Parent = currentNode;
                            exploredNode.MoveCost = currentNode.MoveCost + GetMoveCost(previousMouvement, movement, exploredTerrain);
                            Vector2 exploredNodeWorldPos = map.getTileByPos(exploredGridPos.X, exploredGridPos.Y).Position;
                            exploredNode.EstimatedRemainingCost = (int)(endPosition - exploredNodeWorldPos).Length();
                            // Find the nearest tile to the target (in case no path to the target is found)
                            // Only compares the node to the current nearest
                            FindNearest(exploredNode);
                            openedNodes.Add(exploredNode);
                        }
                    }
                }
            }
            return closedNodes;
        }

    After that, I simply check if the end node is contained in the returned nodes. If so, I add the end node and each parent until I reach the start. If not, I add the nearestToTarget node and each parent until I reach the start. I added a condition before calling FindPath so that only one unit can call a find path each frame (at 60 frames per second), but it makes no difference.

    I thought maybe I could solve this by letting the pathfinding run in the background while the game continues to run correctly, even if it takes a few frames (it is currently sequential, since it is called in the update() of the unit if there's a target location but no path), but I don't really know how... I also thought about sorting my opened nodes list by cost so I don't have to loop over them, but I don't know if that would have an effect on performance... Would there be other solutions? P.S. In the code, when I get the move cost, I check if the unit has to turn to perform the move, and the terrain type; nothing hard to do.
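    Sorting the open list is indeed the classic fix: FindLowestCostOpenedNode scans the whole open list on every iteration, which makes the search roughly quadratic in the number of nodes touched. A priority queue (binary heap) keyed on F = MoveCost + EstimatedRemainingCost brings each pop down to O(log n). A minimal sketch follows; this helper class is mine, not part of the poster's code, and it assumes NodeInfo exposes MoveCost and EstimatedRemainingCost as above:

        using System.Collections.Generic;

        // Min-heap over NodeInfo, ordered by F = MoveCost + EstimatedRemainingCost.
        class BinaryHeapOpenList
        {
            private readonly List<NodeInfo> heap = new List<NodeInfo>();

            private static int F(NodeInfo n) { return n.MoveCost + n.EstimatedRemainingCost; }

            public int Count { get { return heap.Count; } }

            public void Push(NodeInfo node)
            {
                heap.Add(node);
                int i = heap.Count - 1;
                while (i > 0)                       // sift up
                {
                    int parent = (i - 1) / 2;
                    if (F(heap[parent]) <= F(heap[i])) break;
                    var tmp = heap[parent]; heap[parent] = heap[i]; heap[i] = tmp;
                    i = parent;
                }
            }

            public NodeInfo Pop()                   // O(log n) instead of a linear scan
            {
                NodeInfo top = heap[0];
                heap[0] = heap[heap.Count - 1];
                heap.RemoveAt(heap.Count - 1);
                int i = 0;
                while (true)                        // sift down
                {
                    int l = 2 * i + 1, r = l + 1, smallest = i;
                    if (l < heap.Count && F(heap[l]) < F(heap[smallest])) smallest = l;
                    if (r < heap.Count && F(heap[r]) < F(heap[smallest])) smallest = r;
                    if (smallest == i) break;
                    var tmp = heap[smallest]; heap[smallest] = heap[i]; heap[i] = tmp;
                    i = smallest;
                }
                return top;
            }
        }

    One wrinkle to note: when a node already on the heap gets a lower cost, the simplest approach is to push it again and, on pop, skip any entry that is already in the closed set.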


  • Create Custom Speech Bubbles in Silverlight.

    - by mbcrump
    I had a reader email me the following question: “How do you create Speech Bubbles in Silverlight/WPF without adding any extra .dlls?” Right off the bat, I know at least two ways to create speech bubbles that look just like the ones in comic books: using the Callout Shapes included with Blend 4, or using the free 3rd-party control named FreeBubbles (I used this before Blend 4). Unfortunately, we cannot use either of these, as they would both add extra .dlls to the project. So why wouldn’t you want to use one of those? I can think of a few reasons: you do not want to increase the size of your .XAP by including extra .dlls; you do not have Expression Blend or the license to use the .dlls; or you want a custom speech bubble that is not included in the four “Callout” controls that ship with Blend.

    Instead of using one of these methods, we will create a speech bubble in Blend 4 using a Path element and a TextBlock. Before we get started, let’s look at the Callout Shapes included with Blend 4. Using Blend 4 you can simply drag/drop these controls onto your Silverlight application and you are ready to go. We can create all of these speech bubbles and even some of the modern bubbles used in recent comic books.

    Let’s get started. Start up Expression Blend 4 and select the Pen tool. On the artboard, start connecting the dots like I did below. You can add a color if you wish. …keep going …complete. Let’s go ahead and add some text to the speech bubble. Drag a TextBlock from the Panel and put it directly inside the speech bubble. Go ahead and set the TextAlignment to Center for the TextBlock, and give it some text. At this point, you could go ahead and create a user control if you want to reuse the speech bubble you created. Select both the Path and the TextBlock by clicking them while holding down CTRL, and then right-click them. Select Make Into User Control. Give it a name and then build your project.

    Let’s create another one using the Ellipse for the older comic-book style of speech bubbles. Drag an Ellipse to the artboard and give it a color. Now, grab the Pen and draw a triangle like I did below. Simply drag it over a corner of the Ellipse. Select Combine, then Unite, and you will have a Path. At this point, you can go ahead and add a TextBlock like we did earlier.

    Let’s go ahead and create a rounded-rectangle one by adding a Rectangle to the artboard. Go ahead and set the RadiusX and RadiusY to 25 to give it rounded edges. Let’s create another path and drag it right on top of our rounded rectangle like we did earlier. …looking good. Select Combine, then Unite, and you will have a Path. At this point, you can go ahead and add a TextBlock like we did earlier.

    So let’s look at what we’ve created today using the Path element and TextBlock. As you can tell, it required more work but meets the requirements. This was actually fun to do, and I encourage anyone who visits my blog to send in requests like this.


  • Gomoku array-based AI-algorithm?

    - by Lasse V. Karlsen
    Way way back (think 20+ years) I encountered a Gomoku game source code in a magazine that I typed in for my computer and had a lot of fun with. The game was difficult to win against, but the core algorithm for the computer AI was really simple and didn't amount to a lot of code. I wonder if anyone knows this algorithm and has some links to some source or theory about it. The thing I remember was that it basically allocated an array that covered the entire board. Then, whenever I, or it, placed a piece, it would add a number of weights to all locations on the board that the piece could possibly impact. For instance (note that the weights are definitely wrong, as I don't remember those):

        1   1   1
         2  2  2
          3 3 3
           444
        1234X4321
          3 3 3
         2  2  2
        1   1   1

    Then it simply scanned the array for an open location with the lowest or highest value. Things I'm fuzzy on: perhaps it had two arrays, one for me and one for itself, and there was a min/max weighting? There might've been more to the algorithm, but at its core it was basically an array and weighted numbers. Does this ring a bell with anyone at all? Anyone got anything that would help?
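    For what it's worth, the scheme as described is easy to reconstruct. A rough sketch in C# (the weights, board size and names are illustrative guesses, not the magazine's values): every placed piece adds weight to the cells it could influence along the four line directions, and the AI then plays the empty cell with the best combined score.

        using System;

        // Hypothetical sketch, not the original magazine code.
        class GomokuAi
        {
            const int N = 15;                     // board size (a guess)
            int[,] myScore = new int[N, N];       // cells that extend my lines
            int[,] oppScore = new int[N, N];      // cells that block the opponent

            // Spread weights around a newly placed piece at (px, py).
            public void AddWeights(int[,] score, int px, int py)
            {
                int[,] dirs = { { 1, 0 }, { 0, 1 }, { 1, 1 }, { 1, -1 } };
                for (int d = 0; d < 4; d++)
                    for (int step = -4; step <= 4; step++)
                    {
                        if (step == 0) continue;
                        int x = px + dirs[d, 0] * step;
                        int y = py + dirs[d, 1] * step;
                        if (x >= 0 && x < N && y >= 0 && y < N)
                            score[x, y] += 5 - Math.Abs(step);  // nearer cells weigh more
                    }
            }

            // Move choice is then a single scan: the empty cell where attack
            // and defence together score highest.
            public Tuple<int, int> BestMove(bool[,] occupied)
            {
                int best = -1, bx = 0, by = 0;
                for (int x = 0; x < N; x++)
                    for (int y = 0; y < N; y++)
                        if (!occupied[x, y] && myScore[x, y] + oppScore[x, y] > best)
                        {
                            best = myScore[x, y] + oppScore[x, y];
                            bx = x; by = y;
                        }
                return Tuple.Create(bx, by);
            }
        }

    The min/max feel the poster remembers would fall out of keeping the two arrays separate: one rewards extending your own lines, the other rewards blocking the opponent's.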


  • ASP.NET MVC Routing - Redirect to aspx?

    - by bmoeskau
    This seems like it should be easy, but for some reason I'm having no luck. I'm migrating an existing WebForms app to MVC, so I need to keep the root of the site pointing to my existing aspx pages for now and only apply routing to named routes. Here's what I have:

        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
            routes.IgnoreRoute("{resource}.aspx/{*pathInfo}");

            RouteTable.Routes.Add("Root", new Route("", new DefaultRouteHandler()));

            routes.MapRoute(
                "Default",                                                   // Route name
                "{controller}/{action}/{id}",                                // URL with parameters
                new { controller = "Calendar2", action = "Index", id = "" }  // Parameter defaults
            );
        }

    So aspx pages should be ignored, and the default root URL should be handled by this handler:

        public class DefaultRouteHandler : IRouteHandler
        {
            public IHttpHandler GetHttpHandler(RequestContext requestContext)
            {
                return System.Web.Compilation.BuildManager.CreateInstanceFromVirtualPath(
                    "~/Dashboard/default.aspx", typeof(Page)) as IHttpHandler;
            }
        }

    This seems to work OK, but the resulting YPOD gives me this:

        Multiple controls with the same ID '__Page' were found. Trace requires that controls have unique IDs.

    which seems to imply that the page is somehow getting rendered twice. If I simply type the URL to my dashboard page directly, it works fine (no routing, no error). I have no idea why the handler code would be doing anything differently. Bottom line -- I'd like to simply redirect the root URL path to an aspx of my choosing -- can anyone shed some light?
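    One hedged alternative worth sketching (the names here are hypothetical, not from the question): skip the custom IRouteHandler entirely and let a trivial controller action issue a real HTTP redirect to the aspx page. The browser then makes a fresh request that the WebForms pipeline handles normally, which sidesteps whatever double rendering the handler approach triggers.

        using System.Web.Mvc;

        // Hypothetical controller; a sketch only.
        public class HomeController : Controller
        {
            public ActionResult Index()
            {
                // 302 to the WebForms page; MVC never tries to render it itself.
                return Redirect("~/Dashboard/default.aspx");
            }
        }

        // In RegisterRoutes, in place of the "Root" route above:
        routes.MapRoute("Root", "", new { controller = "Home", action = "Index" });

    The trade-off is an extra round trip and a changed address bar, which may or may not be acceptable here.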


  • .NET C#: WebBrowser control Navigate() does not load targeted URL

    - by Dave
    Hey guys, I'm trying to programmatically load a web page via the WebBrowser control with the intent of testing the page & its JavaScript functions. Basically, I want to compare the HTML & JavaScript run through this control against a known output to ascertain whether there is a problem. However, I'm having trouble simply creating and navigating the WebBrowser control. The code below is intended to load the HtmlDocument into the WebBrowser.Document property:

        WebBrowser wb = new WebBrowser();
        wb.AllowNavigation = true;
        wb.Navigate("http://www.google.com/");

    When examining the web browser's state via IntelliSense after Navigate() runs, the WebBrowser.ReadyState is 'Uninitialized', WebBrowser.Document = null, and it overall appears completely unaffected by my call. On a contextual note, I'm running this control outside of a Windows Form object: I do not need to load a window or actually look at the page. Requirements dictate the need to simply execute the page's JavaScript and examine the resultant HTML. Any suggestions are greatly appreciated, thanks!
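    The usual culprit in this situation: WebBrowser is a wrapper over an ActiveX control and only navigates on an STA thread that is pumping Windows messages, so outside a form nothing happens until a message loop runs. A minimal sketch of a headless harness (assuming a WinForms reference; the URL is just a stand-in):

        using System;
        using System.Windows.Forms;

        static class HeadlessBrowser
        {
            [STAThread]
            static void Main()
            {
                var wb = new WebBrowser { ScriptErrorsSuppressed = true };
                wb.DocumentCompleted += (s, e) =>
                {
                    // The page (and its JavaScript) has run; inspect the resulting DOM here.
                    Console.WriteLine(wb.DocumentText.Length);
                    Application.ExitThread();   // stop the message loop when done
                };
                wb.Navigate("http://www.google.com/");
                Application.Run();              // pump messages so navigation can proceed
            }
        }

    Note that DocumentCompleted can fire once per frame on framed pages, so a real harness would check e.Url against the requested URL before tearing the loop down.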


  • Streaming to the Android MediaPlayer

    - by Rob Szumlakowski
    Hi. I'm trying to write a lightweight HTTP server in my app to feed dynamically generated MP3 data to the built-in Android MediaPlayer. I am not permitted to store my content on the SD card. My input data is essentially of infinite length. I tell MediaPlayer that its data source should basically be something like "http://localhost/myfile.mp3". I have a simple server set up that waits for MediaPlayer to make this request. However, MediaPlayer isn't very cooperative. At first, it makes an HTTP GET and tries to grab the whole file. It times out if we try to simply dump data into the socket, so we tried using the HTTP Range header to write data in chunks. MediaPlayer doesn't like this and doesn't keep requesting the subsequent chunks. Has anyone had any success streaming data directly into MediaPlayer? Do I need to implement an RTSP or Shoutcast server instead? Am I simply missing a critical HTTP header? What strategy should I use here? Rob Szumlakowski


  • Need advice or pointers on Release Management Strategies

    - by Murray
    I look after an internal web-based (Java, JSP, Mediasurface, etc.) system that is in constant use (24/5). Users raise tickets for enhancements, bug fixes and other business changes. These issues are signed off individually and assigned to one of three or four developers. Once an issue is complete, it is built, and only the code is committed to SVN. The changed files (templates, html, classes, jsp) are then copied to a dev server and committed to a different repository, from where they are checked out to the UAT server for testing (this often requires the Tomcat service to be restarted, and occasionally the Mediasurface service as well). The users then test and either reject or approve the release. If approved, the edited files are checked out to the Live server and the same process as with UAT is undertaken. If rejected, the developer makes the relevant changes and starts the release process again.

    This is all done manually, without much control. Where different developers are working on similar files, changes sometimes get overwritten by builds done on out-of-sync code; in other cases, changes in UAT are moved to Live in error because they are mixed up in files associated with a signed-off release. I would like to move this to a more controlled and automated process where all source code and output files are held in SVN and releases to Dev, UAT and Live are managed by a CI system (we have TeamCity in house for our .NET applications).

    My question is how to manage the releases of multiple changes where some will be signed off and moved on and others rejected and returned to the developer. The changes may be on overlapping files, and simply merging each release into a Release Branch means that the rejected changes would have to be backed out of the branch. Is there a way to manage this using SVN and CI, or will I simply have to live with the current system?


  • Rotate a Swing JLabel

    - by Johannes Rössel
    I am currently trying to implement a Swing component, inheriting from JLabel, which should simply represent a label that can be oriented vertically. Beginning with this:

        public class RotatedLabel extends JLabel {
            public enum Direction {
                HORIZONTAL, VERTICAL_UP, VERTICAL_DOWN
            }

            private Direction direction;

    I thought it'd be a nice idea to just alter the results from getPreferredSize():

        @Override
        public Dimension getPreferredSize() {
            // swap size for vertical alignments
            switch (getDirection()) {
                case VERTICAL_UP:
                case VERTICAL_DOWN:
                    return new Dimension(super.getPreferredSize().height,
                            super.getPreferredSize().width);
                default:
                    return super.getPreferredSize();
            }
        }

    and then simply transform the Graphics object before I offload painting to the original JLabel:

        @Override
        protected void paintComponent(Graphics g) {
            Graphics2D gr = (Graphics2D) g.create();
            switch (getDirection()) {
                case VERTICAL_UP:
                    gr.translate(0, getPreferredSize().getHeight());
                    gr.transform(AffineTransform.getQuadrantRotateInstance(-1));
                    break;
                case VERTICAL_DOWN:
                    // TODO
                    break;
                default:
            }
            super.paintComponent(gr);
        }

    It seems to work—somehow—in that the text is now displayed vertically. However, placement and size are off: the width of the background (orange in this case) is identical to the height of the surrounding JFrame, which is ... not quite what I had in mind. Any ideas how to solve that in a proper way? Is delegating rendering to superclasses even encouraged?


  • ASP.NET MVC Session Expiration

    - by Andrew Flanagan
    We have an internal ASP.NET MVC application that requires a logon. Log on works great and does what's expected. We have a session expiration of 15 minutes. After sitting on a single page for that period of time, the user has lost the session. If they attempt to refresh the current page or browse to another, they will get a logon page. We keep their request stored, so once they've logged in they can continue on to the page that they've requested. This works great. However, my issue is that on some pages there are AJAX calls. For example, a user may fill out part of a form, wander off and let their session expire. When they come back, the screen is still displayed. If they simply fill in a box (which will make an AJAX call), the AJAX call will return the logon page (inside of whatever div the AJAX should have returned the actual results into). This looks horrible. I think the solution is to make the page itself expire (so that when a session is terminated, the user is automatically returned to the logon screen without any action on their part). However, I'm wondering if there are opinions/ideas on how best to implement this, specifically with regard to best practices in ASP.NET MVC.
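    One common pattern (a sketch under assumptions, not the only best practice): have a filter distinguish AJAX requests from full-page requests when the session is gone, return a bare status code to the former, and let a small piece of client script redirect the whole window. The names below are hypothetical:

        using System.Web.Mvc;

        // Sketch of an MVC action filter; attach it to controllers that need a session.
        public class SessionExpireFilterAttribute : ActionFilterAttribute
        {
            public override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                var ctx = filterContext.HttpContext;

                // A brand-new session on a request that still carried a session
                // cookie means the old session timed out.
                bool expired = ctx.Session != null && ctx.Session.IsNewSession
                    && (ctx.Request.Headers["Cookie"] ?? "").Contains("ASP.NET_SessionId");
                if (!expired) return;

                if (ctx.Request.IsAjaxRequest())
                {
                    ctx.Response.StatusCode = 401;          // let client script react
                    filterContext.Result = new EmptyResult();
                }
                else
                {
                    filterContext.Result = new RedirectResult("~/Account/LogOn");
                }
            }
        }

    On the client, a global AJAX error handler (jQuery's ajaxError, for instance) can watch for the 401 and set window.location to the logon page, so a stale page never renders the logon form inside a div. A timed meta-refresh on the page itself (the "make the page expire" idea) also works, but drifts out of sync with the server if anything else keeps the session alive.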


  • WMD markdown editor - HTML to Markdown conversion

    - by gg1
    I am using the WMD markdown editor on a project and had a question: when I post the form containing the markdown textarea, it (as expected) posts HTML to the server. However, say upon server-side validation something fails and I need to send the user back to edit their entry — is there any way to refill the textarea with just the markdown and not the HTML? Since, as I have it set up, the server only has access to the post data (which is in the form of HTML), I can't seem to think of a way to do this. Any ideas? Preferably a non-JavaScript-based solution.

    Update: I found an HTML-to-markdown converter called Markdownify. I guess this might be the best solution for displaying the markdown back to the user... any better alternatives are welcome!

    Update 2: I found this post on SO, and I guess there is an option to send the data to the server as markdown instead of HTML. Are there any downsides to simply storing the data as markdown in the database? What about displaying it back to the user (outside of an editor)? Maybe it would be best to post both versions (HTML AND markdown) to the server...

    SOLVED: I can simply use PHP Markdown to convert the markdown to HTML server-side.


  • Jquery and XML - How to add the value of nodes

    - by Matias
    Hi Experts, this may be a simple question, though I can't figure out how to do it. I am parsing an XML document with jQuery Ajax. It contains dates and rates. The XML looks something like:

        <rate>
          <date>Today</date>
          <price>66</price>
        </rate>
        <rate>
          <date>Tomorrow</date>
          <price>99</price>
        </rate>

    I simply want to figure out how to calculate the total price of both days, Today and Tomorrow. I thought that by using JavaScript's Number it would simply return the total value of the nodes:

        $(xml).find("rate").each(function() {
          $(this).find("price").each(function() {
            $("#TOTALPRICE").append(Number($(this).text()));
          });
        });

        // output is: 6699

    However, it's just concatenating the values, not adding them. I greatly appreciate your help!! Thanks


  • Desktop Notifications, aka Internal Alert System

    - by Refracted Paladin
    It has become apparent that where I work needs, internally, a "notification system". The issue is that we are very spread out throughout multiple buildings, and the bulk of the work force regularly keeps their email closed for hours at a time. I need to create a simple way to push out a message and have it "pop up" on everyone's computer (or a single computer). My first thought was to write a Windows service that calls a WinForms/WPF app residing on each computer, which simply pops up with the message. Not sure how viable an idea that is, but this is just brainstorming. A different route I thought of: an app that resides in the systray on each computer, polls a DB table and, using Query Notifications, pops up a message each time a new row is added — then simply create an insanely basic app for writing a row to that table. So, what I am asking is if anyone else has walked this path. If so, how? What things did you take into consideration? Are either of my ideas valid starting points, or are "egg and my face in perfect alignment"? Is there a different way that is even simpler? Thanks
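    The systray-polling idea is a few dozen lines in practice. A bare-bones sketch (table name, columns and connection string are made up for illustration): a NotifyIcon app polls a Notifications table on a timer and balloons anything new.

        using System;
        using System.Data.SqlClient;
        using System.Drawing;
        using System.Windows.Forms;

        class NotifierContext : ApplicationContext
        {
            private readonly NotifyIcon icon =
                new NotifyIcon { Visible = true, Icon = SystemIcons.Information };
            private readonly Timer timer = new Timer { Interval = 30000 }; // 30s poll
            private int lastId;   // 0 means "show everything already in the table" on first poll

            public NotifierContext()
            {
                timer.Tick += (s, e) => Poll();
                timer.Start();
            }

            private void Poll()
            {
                using (var conn = new SqlConnection("<connection string>"))
                using (var cmd = new SqlCommand(
                    "SELECT Id, Message FROM Notifications WHERE Id > @last ORDER BY Id", conn))
                {
                    cmd.Parameters.AddWithValue("@last", lastId);
                    conn.Open();
                    using (var rdr = cmd.ExecuteReader())
                        while (rdr.Read())
                        {
                            lastId = rdr.GetInt32(0);
                            icon.ShowBalloonTip(5000, "Notice", rdr.GetString(1), ToolTipIcon.Info);
                        }
                }
            }

            [STAThread]
            static void Main() { Application.Run(new NotifierContext()); }
        }

    Query Notifications (SqlDependency) would replace the timer with a push callback, at the cost of Service Broker setup on the database; for a first cut, a 30-second poll against an indexed identity column is hard to beat for simplicity.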


  • WPF designer gives exception when databinding a label to a checkbox

    - by John
    I'm sure it's something stupid, but I'm playing around with databinding. I have a checkbox and a label on a form. What I'm trying to do is simply bind the Content of the label to the checkbox's IsChecked value. What I've done runs fine (no compilation errors, and it acts as expected), but if I touch the label in the XAML, the designer throws an exception:

        System.NullReferenceException: Object reference not set to an instance of an object.
           at MS.Internal.Designer.PropertyEditing.Editors.MarkupExtensionInlineEditorControl.BuildBindingString(Boolean modeSupported, PropertyEntry propertyEntry)

    The XAML:

        <Window x:Class="UnitTestHelper.MainWindow"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                xmlns:FileSysCtls="clr-namespace:WPFFileSystemUserControls;assembly=WPFFileSystemUserControls"
                xmlns:HelperClasses="clr-namespace:UnitTestHelper"
                Title="MainWindow" Height="406" Width="531">
          <Window.Resources>
            <HelperClasses:ThreestateToBinary x:Key="CheckConverter" />
          </Window.Resources>
          <Grid Height="367" Width="509">
            <CheckBox Content="Step into subfolders" Height="16" HorizontalAlignment="Left"
                      Margin="17,254,0,0" Name="chkSubfolders" VerticalAlignment="Top"
                      Width="130" IsThreeState="False" />
            <Label Height="28" HorizontalAlignment="Left" Margin="376,254,0,0"
                   Name="lblStepResult" VerticalAlignment="Top" Width="120" IsEnabled="True"
                   Content="{Binding IsChecked, ElementName=chkSubfolders, Mode=OneWay, UpdateSourceTrigger=PropertyChanged, Converter={StaticResource CheckConverter}}" />
          </Grid>
        </Window>

    The ThreestateToBinary class is as follows:

        class ThreestateToBinary : IValueConverter
        {
            #region IValueConverter Members

            public object Convert(object value, Type targetType, object parameter,
                System.Globalization.CultureInfo culture)
            {
                if ((bool)value)
                    return "Checked";
                else
                    return "Not checked";
                //throw new NotImplementedException();
            }

            public object ConvertBack(object value, Type targetType, object parameter,
                System.Globalization.CultureInfo culture)
            {
                return ((string)value == "Checked");
                //throw new NotImplementedException();
            }

            #endregion
        }

    Quite honestly, I'm playing around with it at this point. It was originally simpler (not using the ValueConverter) but displayed similar behavior when I simply had the content set to:

        Content="{Binding IsChecked, ElementName=chkSubfolders, UpdateSourceTrigger=PropertyChanged}"

    Any ideas? Thanks, John


  • Cookies not present after using XMLHttpRequest

    - by Joe B
    I'm trying to make a bookmarklet to download videos off of YouTube, but I've come across a little problem. To detect the highest quality video available, I use a sort of brute-force method, in which I make requests using the XMLHttpRequest object until a 404 isn't returned (I can't wait for a 200 OK to be returned, because YouTube redirects to a different server if the video is available, and the cross-domain policy won't allow me to access any of that data). Once a working URL is found, I simply set window.location to the URL and the download should start, right? Wrong. A request is made, but for reasons unknown to me, the cookies are stripped and YouTube returns a 403 access denied. This does not happen if the XML requests aren't made before it, i.e. if I just set the window.location to the URL, everything works fine; it's when I do the XMLHttpRequest that the cookies aren't sent. It's hard to explain, so here's the script:

        var formats = ["37", "22", "35", "34", "18", ""];
        var url = "/get_video?video_id=" + yt.getConfig('SWF_ARGS')['video_id'] +
                  "&t=" + (unescape(yt.getConfig('SWF_ARGS')['t'])) + "&fmt=";
        for (var i = 0; i < formats.length; i++) {
            xmlhttp = new XMLHttpRequest();
            xmlhttp.open("HEAD", url + formats[i], false);
            xmlhttp.send(null);
            if (xmlhttp.status != 404) {
                document.location = url + formats[i];
                break;
            }
        }

    That script does not send the cookies after setting the document.location, and thus does not work. However, simply doing this:

        document.location = "/get_video?video_id=" + yt.getConfig('SWF_ARGS')['video_id'] +
                            "&t=" + (unescape(yt.getConfig('SWF_ARGS')['t']))

    DOES send the cookies along with the request, and does work. The only downside is I can't automatically detect the highest quality; I just have to try every "fmt" parameter manually until I get it right. So my question is: why is the XMLHttpRequest object removing cookies from subsequent requests? This is the first time I've ever done anything in JS, by the way, so please, go easy on me. ;)


  • How to produce an HTTP 403-equivalent WCF Message from an IErrorHandler?

    - by Andras Zoltan
    I want to write an IErrorHandler implementation that will handle AuthenticationException instances (a proprietary type), and then in the implementation of ProvideFault provide a traditional HTTP response with a status code of 403 as the fault message. So far I have my first best guess wired into a service, but WCF appears to be ignoring the output message completely, even though the error handler is being called. At the moment, the code looks like this:

        public class AuthWeb403ErrorHandler : IErrorHandler
        {
            #region IErrorHandler Members

            public bool HandleError(Exception error)
            {
                return error is AuthenticationException;
            }

            public void ProvideFault(Exception error, MessageVersion version, ref Message fault)
            {
                //first attempt - just a stab in the dark, really
                HttpResponseMessageProperty property = new HttpResponseMessageProperty();
                property.SuppressEntityBody = true;
                property.StatusCode = System.Net.HttpStatusCode.Forbidden;
                property.StatusDescription = "Forbidden";
                var m = Message.CreateMessage(version, null);
                m.Properties[HttpResponseMessageProperty.Name] = property;
                fault = m;
            }

            #endregion
        }

    With this in place, I just get the standard WCF HTML: 'The server encountered an error processing the request. See server logs for more details.' — which is what would happen if there were no IErrorHandler. Is this a feature of the behaviours added by WebServiceHost? Or is it because the message I'm building is simply wrong!? I can verify that the event log is indeed not receiving anything. My current test environment is a WebGet method (both XML and JSON) hosted in a service that is created with the WebServiceHostFactory, with ASP.NET compatibility switched off. The service method simply throws the exception in question.
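    A hedged guess at the "behaviours added by WebServiceHost" theory: WebServiceHost wires up WebHttpBehavior, which installs its own error handler, so a handler registered elsewhere can have its fault ignored. A sketch of the usual workaround — subclass the behaviour and override its AddServerErrorHandlers hook (a sketch only, untested against this exact setup):

        using System.ServiceModel.Description;
        using System.ServiceModel.Dispatcher;
        using System.ServiceModel.Web;

        public class AuthWebHttpBehavior : WebHttpBehavior
        {
            protected override void AddServerErrorHandlers(
                ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher)
            {
                // Replace the default web error handling rather than appending to it.
                endpointDispatcher.ChannelDispatcher.ErrorHandlers.Clear();
                endpointDispatcher.ChannelDispatcher.ErrorHandlers.Add(
                    new AuthWeb403ErrorHandler());
            }
        }

    The behaviour then has to be applied to the endpoint in place of the stock WebHttpBehavior (for instance from a custom host factory), and it may also be worth experimenting with creating the fault message with an empty action string instead of null.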


  • Switching between iScroll and standard WebView Scrolling functionality

    - by Jonathan
    I'm using iScroll 4 for my rather heavy iPhone web/PhoneGap app and trying to find a way to switch between iScroll's scrolling functionality and standard "native" webview scrolling, basically on click. Why I want this is described below. My app has several subpages, all in one file. Some subpages have input fields, some don't. As we all know, iScroll + input fields = you're out of luck. I've wrapped iScroll's wrapper div (and all its functionality) around the one subpage where scrolling is crucial and where there are no input fields. The other sections I've simply placed outside this div, which gives them no scrolling functionality at all. I've of course tried wrapping all subpages inside the wrapper and enabling/disabling scroll (shown below), but that didn't work for me at all:

        myScroll.disable()
        myScroll.enable()

    By placing some subpages outside the main scrolling area / iScroll div, I've disabled both iScroll's and the standard webview scrolling (the latter, I guess, iScroll does), which leaves me with only basic scrolling: one can move around vertically, but once you let go of the screen, the "scrolling" stops. Quite natural, but alas so nasty. Therefore, I'm searching for a way to enable standard webview scrolling on the subpages placed outside of iScroll's wrapper div. I've tried different approaches, such as the one above and using:

        document.removeEventListener('touchmove', preventDefault, false);
        document.removeEventListener('touchmove', preventDefault, true);

    But with no success. Sorry for not providing you guys with any hard code or demos to test out for yourselves; it's simply too much code, and it would be presented so far out of its context that nobody would be able to debug it. So, is there a way in JavaScript to do this — switching between iScroll scrolling functionality and standard "native" webview scrolling? I would rather not rebuild the entire DOM framework, so a solution like the one described above would be preferable.


  • Firebird sequence-backed ID shorthand

    - by pilcrow
    What do others do to simplify the creation of simple, serial surrogate keys populated by a SEQUENCE (a.k.a. GENERATOR) in Firebird >= 2.1? I find the process comparatively arduous. For example, in PostgreSQL, I simply type:

        pg> CREATE TABLE tbl (
          >     id SERIAL NOT NULL PRIMARY KEY,
          >     ...

    In MySQL, I simply type:

        my> CREATE TABLE tbl (
          >     id INT NOT NULL PRIMARY KEY AUTO_INCREMENT,
          >     ...

    But in Firebird I type:

        fb> CREATE TABLE tbl (
          >     id BIGINT NOT NULL PRIMARY KEY,
          >     ...
        fb> CREATE SEQUENCE tbl_id_seq;
        fb> SET TERM !!;
          > CREATE TRIGGER tbl_id_trg FOR tbl
          >     ACTIVE BEFORE INSERT POSITION 0
          > AS
          > BEGIN
          >     IF ((new.id IS NULL) OR (new.id <= 0)) THEN
          >     BEGIN
          >         new.id = GEN_ID(tbl_id_seq, 1);
          >     END
          > END !!
          > SET TERM ; !!

    ... and I get pretty bored by the time I reach the trigger definition. However, I routinely make SEQUENCE-backed ID fields for temporary, development and throw-away tables. What do others do to simplify this? Work with an IDE? Run a pre-processing, in-house Perl script over the DDL file? Etc.


  • Displaying Many-To-Many Database relationship in VB.NET 2008 with DataGrid, MS SQL 2008

    - by user337501
    My computer bombed while posting this; I couldn't find a duplicate question, but if there is one, forgive me. So, I've run into a wall. And rather than use a ladder to avoid it, I'd like to go through it. I'm setting up what I can best describe as a many-to-many relationship in a database. To exemplify, imagine I have three primary tables: Items, Categories, Sections (never mind the potential redundancy). Then I have another table, Properties. Items, Categories, and Sections can each be associated with many properties. A single property can be associated with one, all, or none of the other tables. The best way I can figure to do this is to have join tables make the relationship. I.e.:

        tblItems ---- (Foreign Key) ---- tblItems_To_Properties ---- (Foreign Key) ---- tblProperties

    In this example, tblItems simply has an "ItemID" primary key. tblItems_To_Properties has its own primary key (tblItems_To_PropertiesID), a foreign key to the item (ItemID) and a foreign key to the property (PropertyID). The Properties table simply has its primary key (PropertyID). I hope this example isn't too confusing... if I have to, I can find a way to put a diagram up or something.

    My problem is, I want to display this in a DataGrid using the Master-Detail method (DevExpress GridControl). I use tblItems as a test, and I can see the items in the parent view, but in the child view I see (understandably) the join table, and that is it. My goal is to make the grid ignore the join table and show the Properties table as the only child. Any help on this method or insight into another solution would be much appreciated.


  • Why do properties require explicit typing during compilation?

    - by ctpenrose
    Compilation using property syntax requires the type of the receiver to be known at compile time. I may not understand something, but this seems like a broken or incomplete compiler implementation, considering that Objective-C is a dynamic language. The property "comment" is defined with:

        @property (nonatomic, retain) NSString *comment;

    and synthesized with:

        @synthesize comment;

    "document" is an instance of one of several classes which conform to:

        @protocol DocumentComment <NSObject>
        @property (nonatomic, retain) NSString *comment;
        @end

    and is simply declared as:

        id document;

    When using the following property syntax:

        stringObject = document.comment;

    the following error is generated by gcc:

        error: request for member 'comment' in something not a structure or union

    However, the following equivalent receiver-method syntax compiles without warning or error and works fine, as expected, at run time:

        stringObject = [document comment];

    I don't understand why properties require the type of the receiver to be known at compile time. Is there something I am missing? I simply use the latter syntax to avoid the error in situations where the receiving object has a dynamic type. Properties seem half-baked.


  • Source Control - XCode - Visual Studio 2005/2008 / 2010

    - by Mick Walker
    My apologies if this has been asked before; I wasn't quite sure if this question should be asked on a programming forum, as it relates more to the programming environment than to a particular technology. So please accept my (double) apologies if I am posting this in the wrong place; my logic in this case was: if it affects the code I write, then this is the place for it.

    At home, I do a lot of my development on a Mac Pro. I do development for the Mac, iPhone and Windows on this machine (Xcode & Visual Studio — multiple versions installed in Boot Camp, but generally I run it via Parallels). When visiting a client, I have a similar setup, but on my MacBook Pro. What I want is a source control solution to install on the Mac Pro that will support both Xcode and multiple versions of Visual Studio, so that when I visit a client, I can simply grab the latest copy from source control via the MacBook Pro.

    Whilst visiting the client, he or she may suggest changes, and minor ones I would tend to make on site, so I need the ability to merge any modified code back into the trunk of the project/solution when I return home. At the moment, I am using no source control at all, and rely on simply copying folders and overwriting them when I return from a client — that's my 'merge'!!! I was wondering if anyone had any ideas of a source control provider I could use which would support both Windows and Mac development environments and is cheap (free would be better).


  • Text editor capable of viewing invisibles?

    - by Timo
    A recent problem* left me wondering whether there is a text editor out there that lets you see every single character of the file, even if they are invisible? Specifically, I'm not looking for hex editing capabilities; I am interested in a text editor that'll show me all of the invisible characters (not just the common whitespace / line break characters). The BOM marker is just one example; others are e.g. mathematical invisibles or possibly unsupported characters. I'm not looking for a text editor that simply supports a large variety of text encodings / translations between encodings. All text editors I've come across treat the invisible characters correctly, i.e. leave them invisible (or they simply get removed in the translation, as in the case of the BOM marker). I'm asking this mostly out of academic interest, so I'm not particular about any specific OS. I can easily test Linux and OS X solutions, but if you recommend a Windows editor, I would appreciate it if you included a description of how the editor handles invisibles other than whitespace / line breaks.

    *The incident that led me to this question: I wrote a Perl script using TextWrangler and managed to change the encoding to UTF8 BOM, which inserts the BOM marker at the start of the file. Perl (or rather the operating system) promptly misses the #! and mayhem ensues. It then took me the better part of an afternoon to figure this out, since most text editors do not show the BOM marker even with various "show invisibles" options turned on. Now I've learned my lesson and will use less immediately :-).


  • How to convert an InputStream to a DataHandler?

    - by pcorey
    I'm working on a Java web application in which files will be stored in a database. Originally we retrieved files already in the DB by simply calling getBytes on our result set:

        byte[] bytes = resultSet.getBytes(1);
        ...

    This byte array was then converted into a DataHandler using the obvious constructor:

        dataHandler = new DataHandler(bytes, "application/octet-stream");

    This worked great until we started trying to store and retrieve larger files. Dumping the entire file contents into a byte array and then building a DataHandler out of that simply requires too much memory. My immediate idea is to retrieve a stream of the data from the database with getBinaryStream and somehow convert that InputStream into a DataHandler in a memory-efficient way. Unfortunately, it doesn't seem like there's a direct way to convert an InputStream into a DataHandler. Another idea I've been playing with is reading chunks of data from the InputStream and writing them to the OutputStream of the DataHandler. But... I can't find a way to create an "empty" DataHandler that returns a non-null OutputStream when I call getOutputStream... Has anyone done this? I'd appreciate any help you can give me or leads in the right direction.

