Search Results

Search found 3021 results on 121 pages for 'min hong tan'.

  • How to create a link to Nintex Start Workflow Page in the document set home page

    - by ybbest
    In this blog post, I’d like to show you how to create a link to the Nintex Start Workflow page on the document set home page.

    1. First, upload the latest version of jQuery to the Style Library of your team site.
    2. Then, upload a text file to the Style Library to hold your own HTML and JavaScript.
    3. On the document set home page, insert a new Content Editor web part and link it to the text file you just uploaded.
    4. Update the text file with the following content (you can download this file here):

        <script type="text/javascript" src="/Style%20Library/jquery-1.9.0.min.js"></script>
        <script type="text/javascript" src="/_layouts/sp.js"></script>
        <script type="text/javascript">
        $(document).ready(function () {
            listItemId = getParameterByName("ID");
            setTheWorkflowLink("YBBESTDocumentLibrary");
        });

        function buildWorkflowLink(webRelativeUrl, listId, itemId) {
            var workflowLink = webRelativeUrl + "_layouts/NintexWorkflow/StartWorkflow.aspx?list=" + listId + "&ID=" + itemId + "&WorkflowName=Start Approval";
            return workflowLink;
        }

        function getParameterByName(name) {
            name = name.replace(/[\[]/, "\\\[").replace(/[\]]/, "\\\]");
            var regexS = "[\\?&]" + name + "=([^&#]*)";
            var regex = new RegExp(regexS);
            var results = regex.exec(window.location.search);
            if (results == null) {
                return "";
            }
            else {
                return decodeURIComponent(results[1].replace(/\+/g, " "));
            }
        }

        function setTheWorkflowLink(listName) {
            var SPContext = new SP.ClientContext.get_current();
            web = SPContext.get_web();
            list = web.get_lists().getByTitle(listName);
            SPContext.load(web, "ServerRelativeUrl");
            SPContext.load(list, 'Title', 'Id');
            SPContext.executeQueryAsync(setTheWorkflowLink_Success, setTheWorkflowLink_Fail);
        }

        function setTheWorkflowLink_Success(sender, args) {
            var listId = list.get_id();
            var listTitle = list.get_title();
            var webRelativeUrl = web.get_serverRelativeUrl();
            var startWorkflowLink = buildWorkflowLink(webRelativeUrl, listId, listItemId);
            $("a#submitLink").attr('href', startWorkflowLink);
        }

        function setTheWorkflowLink_Fail(sender, args) {
            alert("There is a problem setting up the submit exam approval link");
        }
        </script>
        <a href="" target="_blank" id="submitLink"><span style="font-size:14pt">Start the approval process.</span></a>

    5. Save your changes and go to the document set item; you will now see the link on the home page.

    Notes: 1. You can create a link to start the workflow using the following build-dynamic-string configuration: {Common:WebUrl}/_layouts/NintexWorkflow/StartWorkflow.aspx?list={Common:ListID}&ID={ItemProperty:ID}&WorkflowName=workflowname. With this link you will still need to click the Start button; this is standard SharePoint behaviour and cannot be altered.

    References: http://connect.nintex.com/forums/27143/ShowThread.aspx How to use HTML and JavaScript in a Content Editor web part in SharePoint 2010

    Read the article

  • Is Agile the new micromanagement?

    - by Smith James
    Hi, this question has been cooking in my head for a while, so I wanted to ask those who are following agile/scrum practices in their development environments.

    My company has finally ventured into incorporating agile practices, and has started out with a team of 4 developers in an agile group on a trial basis. It has been 4 months with 3 iterations, and they continue to do it without going fully agile for the rest of us. This is due to management's lack of trust that agile can meet business requirements, along with quite a bit of ad hoc type requests from high above.

    Recently, I talked to the developers who are part of this initiative; they tell me that it's not fun. They are not allowed to talk to other developers by their Scrum master, and are not allowed to take any phone calls in the work area (which may be fine to an extent). For example, if I want to talk to my friend for kicks who is in the agile team, I am not allowed without the approval of the Scrum master, who is sitting right next to the agile team. The idea of all this, as far as I can tell, is to provide a complete vacuum for agile developers from any interruptions and to have them put in a good 6+ productive hours.

    Well, guys, I am no agile guru, but from what I have read of the Yahoo agile rollout document and similar ones from other organizations, I get the feeling that agile is not cheap. It requires resources and budget to instill agile into the teams and to correct issues as they arise, to put them back on track. For starters, it requires training for developers, coaching for managers, etc., etc... The current Scrum master was a manager who took a couple of days of agile training classes, paid for by management, and is now leading this agile team. I have also heard in meetings that the agile manifesto is not set in stone and is customized differently for each company. Well, it all sounds good and reasonable.

    In conclusion, I always thought agile was supposed to bring harmony to development teams, resulting in happy developers. However, I am getting the very opposite feeling when talking to the developers on the agile team. They are unhappy that they cannot talk about anything but work, sitting quietly all day just working, and they feel it's just another way for management to make them work more. Tell me, please: is this an example of good practices being used for selfish advantage, for more dollars? Or maybe it's just that developers like me and this agile team don't like to work in an environment where they only breathe work because they are at work. Thanks.

    Edit: It's a company in the healthcare domain that has offices across the US, but we're in Texas. It definitely feels like cowboy-style agile, which makes me really not want to go for agile at all, especially at my current company. All of it has to do with management being completely cheap: cutting out expensive coffee for a cheaper version, emphasis on savings and on being productive while staying as lean as possible. My feeling is that someone in management behind the door threw out this idea that agile makes you produce more, so we can show our bosses we're producing more with the same headcount. Or maybe it will allow us to reduce headcount, if that's the case.

    EDITED: They do have their 5 min daily meeting. But they are not allowed to chat or talk with someone outside of their team. All focus is on work.

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 22 (sys.dm_db_index_physical_stats)

    - by Tamarick Hill
    The sys.dm_db_index_physical_stats Dynamic Management Function returns information about the fragmentation levels, page counts, depth, number of levels, record counts, etc. of the indexes on your database instance. One row is returned for each level in a given index, which we will discuss more later. The function takes a total of 5 input parameters: (1) database_id, (2) object_id, (3) index_id, (4) partition_number, and (5) the mode of the scan level that you would like to run. Let’s use this function with our AdventureWorks2012 database to better illustrate the information it provides.

        SELECT * FROM sys.dm_db_index_physical_stats(db_id('AdventureWorks2012'), NULL, NULL, NULL, NULL)

    As you can see from the result set, there is a lot of beneficial information returned from this DMF. The first several columns in the result set (database_id, object_id, index_id, partition_number, index_type_desc, alloc_unit_type_desc) are either self-explanatory or have been explained in previous blog sessions, so I will not go into detail about them at this time. The next column in the result set is index_depth, which represents how deep the index goes. For example, if we have a large index that contains 1 root page, 3 intermediate levels, and 1 leaf level, our index depth would be 5. The next column is index_level, which refers to the level (of the depth) that a particular row is referring to. Next is probably one of the most beneficial columns in this result set, avg_fragmentation_in_percent. This column shows you how fragmented a particular level of an index may be. Many people use this column within their index maintenance jobs to dynamically determine whether they should do a REORG or a full REBUILD of a given index. The fragment_count column represents the number of fragments in a leaf level, while avg_fragment_size_in_pages represents the number of pages in a fragment. The page_count column tells you how many pages are in a particular index level. In my result set above, you see that the remaining columns all have NULL values. This is because I did not specify a ‘mode’ in my query, and as a result it used the ‘LIMITED’ mode by default. The LIMITED mode is meant to be lightweight, so it does not collect information for every column in the result set. I will re-run my query using the ‘DETAILED’ mode and you will see that we now have results for these columns.

        SELECT * FROM sys.dm_db_index_physical_stats(db_id('AdventureWorks2012'), NULL, NULL, NULL, 'DETAILED')

    From the remaining columns, we get even more detailed information, such as how many records are in a particular index level (record_count). We have a column for ghost_record_count, which represents the number of records that have been marked for deletion but have not yet been physically removed by the background ghost cleanup process. We then see information on the MIN, MAX, and AVG record size in bytes. The forwarded_record_count column refers to records that have been updated and no longer fit within the row on the page, and thus have to be moved; a forwarded record is left in the original location with a pointer to the new location. The last column in the result set is compressed_page_count, which tells you how many pages in your index have been compressed. This is a very powerful DMF that returns good information about the current indexes in your system. However, depending on the mode you select, it can be a very resource-intensive function, so be careful with how you use it.
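    To make the REORG-versus-REBUILD decision above concrete, here is a minimal T-SQL sketch keyed off avg_fragmentation_in_percent. The 5% and 30% thresholds follow the commonly cited Books Online guidance, and the AdventureWorks2012 table and index names are only assumed examples; substitute your own:

        DECLARE @frag float;

        SELECT @frag = avg_fragmentation_in_percent
        FROM sys.dm_db_index_physical_stats(
                 DB_ID('AdventureWorks2012'),
                 OBJECT_ID('Sales.SalesOrderDetail'),
                 1,        -- index_id: 1 = the clustered index
                 NULL,     -- partition_number
                 'LIMITED')
        WHERE index_level = 0;  -- fragmentation is reported per level; check the leaf

        IF @frag >= 30
            ALTER INDEX PK_SalesOrderDetail_SalesOrderID_SalesOrderDetailID
                ON Sales.SalesOrderDetail REBUILD;
        ELSE IF @frag >= 5
            ALTER INDEX PK_SalesOrderDetail_SalesOrderID_SalesOrderDetailID
                ON Sales.SalesOrderDetail REORGANIZE;

    Wrapped in a cursor over the DMF's full result set, this same check is the core of most dynamic index maintenance jobs.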
For more information on this Dynamic Management Function, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms188917.aspx Follow me on Twitter @PrimeTimeDBA

    Read the article

  • MVC 2 jQuery Client-side Validation

    - by nmarun
    Well, I watched Phil Haack’s show What's New in Microsoft ASP.NET MVC 2 and was impressed by the client-side validation (starts at 17:45) that MVC 2 offers. I tried creating the same, but Phil does not show which .js files need to be included, and I was not able to find the source code for the application that he used. In order to find out the required JavaScript file references, I added all of the script files in my application to the page and ran it. Of course it worked, but this is definitely not an optimal solution. By removing one at a time and testing the app, I’ve short-listed the following ones:

        <script src="../../Scripts/jquery-1.4.1.min.js" type="text/javascript"></script>
        <script src="../../Scripts/MicrosoftAjax.js" type="text/javascript"></script>
        <script src="../../Scripts/MicrosoftMvcValidation.js" type="text/javascript"></script>

    Now, a little about the feature itself. Say I’m working with a Book application, so my model will look something like:

        public class Book
        {
            [HiddenInput(DisplayValue = false)]
            public int BookId { get; set; }

            [DisplayName("Book Title")]
            [Required(ErrorMessage = "Book title is required")]
            [StringLength(20, ErrorMessage = "Must be under 20 characters")]
            public string Title { get; set; }

            [Required(ErrorMessage = "Author is required")]
            [StringLength(40, ErrorMessage = "Must be under 40 characters")]
            public string Author { get; set; }

            public decimal Price { get; set; }

            [DisplayName("ISBN")]
            [StringLength(13, ErrorMessage = "Must be 13 characters")]
            public string Isbn { get; set; }
        }

    This ensures that the data passed will be validated upon post. But what happens if you add the following line (along with the above-mentioned .js files)?

        <% Html.EnableClientValidation(); %>

    Now this acts as ‘on-the-fly’, or real-time, validation. When the user types 20 characters for the Title, the error shows up right on the 21st character. Beautiful… and you do not have to create the JavaScript function(s) for this. They’re auto-magically created for you. (Doing a ‘View Source’ on the browser page shows you the JavaScript logic that goes on behind the scenes.) I bumped into another post that shows how .NET 4 allows us to create custom validation attributes: Dynamic Range validation in MVC 2. This will help us attach virtually any business logic to the model itself. Please see the source code I’ve worked with.

    Read the article

  • Am I right about the differences between Floyd-Warshall, Dijkstra's and Bellman-Ford algorithms?

    - by Programming Noob
    I've been studying the three and I'm stating my inferences from them below. Could someone tell me if I have understood them accurately enough or not? Thank you.

    1. Dijkstra's algorithm is used only when you have a single source and you want to know the smallest path from one node to another, but it fails when edge weights can be negative (a negative edge reached through a longer path can undercut a distance Dijkstra's has already finalized).
    2. Floyd-Warshall's algorithm is used when any of the nodes can be a source, so you want the shortest distance from any source node to any destination node. This only fails when there are negative cycles (this is the most important one. I mean, this is the one I'm least sure about:)
    3. Bellman-Ford is used like Dijkstra's, when there is only one source. It can handle negative weights, and its working is the same as Floyd-Warshall's except for one source, right?

    If you need to have a look, the corresponding algorithms are (courtesy Wikipedia):

    Bellman-Ford:

        procedure BellmanFord(list vertices, list edges, vertex source)
            // This implementation takes in a graph, represented as lists of vertices
            // and edges, and modifies the vertices so that their distance and
            // predecessor attributes store the shortest paths.

            // Step 1: initialize graph
            for each vertex v in vertices:
                if v is source then v.distance := 0
                else v.distance := infinity
                v.predecessor := null

            // Step 2: relax edges repeatedly
            for i from 1 to size(vertices)-1:
                for each edge uv in edges:   // uv is the edge from u to v
                    u := uv.source
                    v := uv.destination
                    if u.distance + uv.weight < v.distance:
                        v.distance := u.distance + uv.weight
                        v.predecessor := u

            // Step 3: check for negative-weight cycles
            for each edge uv in edges:
                u := uv.source
                v := uv.destination
                if u.distance + uv.weight < v.distance:
                    error "Graph contains a negative-weight cycle"

    Dijkstra:

        function Dijkstra(Graph, source):
            for each vertex v in Graph:           // Initializations
                dist[v] := infinity ;             // Unknown distance function from source to v
                previous[v] := undefined ;        // Previous node in optimal path from source

            dist[source] := 0 ;                   // Distance from source to source
            Q := the set of all nodes in Graph ;  // All nodes in the graph are unoptimized - thus are in Q
            while Q is not empty:                 // The main loop
                u := vertex in Q with smallest distance in dist[] ;  // Start node in first case
                if dist[u] = infinity:
                    break ;                       // all remaining vertices are inaccessible from source

                remove u from Q ;
                for each neighbor v of u:         // where v has not yet been removed from Q.
                    alt := dist[u] + dist_between(u, v) ;
                    if alt < dist[v]:             // Relax (u,v,a)
                        dist[v] := alt ;
                        previous[v] := u ;
                        decrease-key v in Q;      // Reorder v in the Queue
            return dist;

    Floyd-Warshall:

        /* Assume a function edgeCost(i,j) which returns the cost of the edge from i to j
           (infinity if there is none).
           Also assume that n is the number of vertices and edgeCost(i,i) = 0
        */

        int path[][];
        /* A 2-dimensional matrix. At each step in the algorithm, path[i][j] is the shortest path
           from i to j using intermediate vertices (1..k-1). Each path[i][j] is initialized to
           edgeCost(i,j).
        */

        procedure FloydWarshall ()
            for k := 1 to n
                for i := 1 to n
                    for j := 1 to n
                        path[i][j] = min ( path[i][j], path[i][k]+path[k][j] );

    Read the article

  • Dynamically creating meta tags in asp.net mvc

    - by Jalpesh P. Vadgama
    As we all know, meta tags play a very important role in search engine optimization, and if we want our site listed with a good ranking on search engines then we have to provide them. Some time ago I blogged about dynamically creating meta tags in ASP.NET 2.0/3.5 sites; in this blog post I am going to explain how we can create meta tags dynamically in ASP.NET MVC very easily. To have dynamic meta tags, we have to create them on the server side, so I have created a method like the following.

        public string HomeMetaTags()
        {
            System.Text.StringBuilder strMetaTag = new System.Text.StringBuilder();
            strMetaTag.AppendFormat(@"<meta content='{0}' name='Keywords'/>", "Home Action Keyword");
            strMetaTag.AppendFormat(@"<meta content='{0}' name='Description'/>", "Home Description Keyword");
            return strMetaTag.ToString();
        }

    Here you can see that I have written a method which returns a string with meta tags. You can put any logic here: you can fetch the values from the database, or even from XML, based on a key passed in. For demo purposes I have hardcoded them. So the method will create a meta tag string and return it. Now I am going to store that meta tag string in the ViewBag, just like we do with the title. In this post I am using the standard template, so we already have our title in ViewBag.Message. In the same way I am going to save the meta tag, like the following.

        public ActionResult Index()
        {
            ViewBag.Message = "Welcome to ASP.NET MVC!";
            ViewBag.MetaTag = HomeMetaTags();
            return View();
        }

    Here in the above code you can see that I have stored the string in ViewBag.MetaTag. Now, as I am using the standard ASP.NET MVC 3 template, our head element lives in the _Layout.cshtml file in the Shared folder. So to render the meta tag I have modified the head tag of _Layout.cshtml like the following.

        <head>
            <title>@ViewBag.Title</title>
            <link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" />
            <script src="@Url.Content("~/Scripts/jquery-1.5.1.min.js")" type="text/javascript"></script>
            @Html.Raw(ViewBag.MetaTag)
        </head>

    Here in the above code you can see I have used the @Html.Raw method to embed the meta tag in the _Layout.cshtml page. The Html.Raw method writes the output to the head tag section without HTML-encoding it; as we have already taken care of the HTML in the string function, we don’t need the encoding. Now it’s time to run the application in the browser. Once you run your application and click on View Source, you will find the meta tags for the home page. That’s it. It’s very easy to create meta tags dynamically. Hope you liked it. Stay tuned for more. Till then, happy programming.

    Read the article

  • P90X or How I Stopped Worrying and Love Exercise

    - by Matt Christian
    Last Wednesday, after many UPS delivery failures, I received P90X in the mail.  P90X is a series of DVDs and a nutrition guide you use to shed pounds and gain muscle.  Odds are you've seen the infomercial on TV at some point if you watch a little tube now and again.  I started last Thursday and am still standing to tell this tale.

    At its core, P90X is a 12-DVD set of exercise videos.  Each video is a different workout routine that typically lasts around an hour (some up to 1 1/2 hours).  Every day you are supposed to do one of the workouts, and they differ from day to day (sometimes you may repeat a shorter 6 min workout dedicated to abs twice a week).  There are different programs focused on different areas: for weight loss you do the Lean Program, for standard weight loss and muscle gain the Regular Program, and for those hardcore health-nuts, the Insane Program (which consists of two 1-hour workouts per day).  Each program has a different set of workouts per week which you repeat for 3 weeks, followed by a 'Relaxation Week', which is essentially a slightly different order.  After the month of workouts is over, you've finished 1 phase out of 3.  P90X takes 90 days, split into 3 phases (1 phase per month).  Every phase has a different workout order which is also focused on different areas (weight loss, muscle gain, etc...).  With the DVDs you also get a small glossy book of about 100 pages detailing the different workouts and the different programs, as well as a sample workout to see if you're even ready to start P90X.

    The second part of P90X, which can also be considered the 'core' (actually the other half of the core), is the nutrition guide that is included.  The nutrition guide is a book similar to the one that defines the exercises (about 100 glossy pages), though it details the foods you should eat, the amounts, and a number of healthy (and tasty!) recipes.  The guide is split up into 3 phases as well, promoting high protein and low carb/dairy during Phase 1, and levelling off through to Phase 3, where you have a relatively balanced amount of every food group.

    So after 1 week, where am I?  I've stuck quite close to the nutrition guide (there isn't 'diet food' in here, people, it's ACTUALLY food) and done my exercise every day.  I think a lot of the first week is getting into the whole idea and learning the moves performed on the DVD.  Have I lost weight?  No.  Do I feel some definition already starting to poke out?  Absolutely (no pun intended).

    Tony Horton (the 51-year-old hulk that runs the whole thing) is very fun to listen to and work along with, and the 'diet' really isn't too hard to follow unless all you eat is carbs.  I've tried the gym thing and could not get motivated enough to continue going.  P90X is the first time I've ached from a workout BEFORE starting my next workout.  For anyone interested, Google 'P90X' or 'BeachBody' to find out more information about this awesome program!

    Read the article

  • SQL SERVER – Why Do We Need Data Quality Services – Importance and Significance of Data Quality Services (DQS)

    - by pinaldave
    Databases are awesome.  I’m sure my readers know my opinion about this – I have made SQL Server my life’s work, after all!  I love technology and all things computer-related.  Of course, even with my love for technology, I have to admit that it has its limits.  For example, it takes a human brain to notice that data has been input incorrectly.  Computer “brains” might be faster than humans, but human brains are still better at pattern recognition.  For example, a human brain will notice that “300” is a ridiculous age for a human to be, but to a computer it is just a number.  A human will also notice similarities between “P. Dave” and “Pinal Dave,” but this would stump most computers.

    In a database, these sorts of anomalies are incredibly important.  Databases are often used by multiple people who rely on the data to be true and accurate, so data quality is key.  That is why the improved Master Data Management features in SQL Server include Data Quality Services.  This service has the ability to recognize and flag anomalies like out-of-range numbers and similarities between data.  This allows a human brain, with its pattern recognition abilities, to double-check and ensure that P. Dave is the same as Pinal Dave.

    A nice feature of Data Quality Services is that once you set the rules for the program to follow, it will not only keep your data organized in the future, but also go back and “fix up” any data that has already been entered.  It also allows you to combine data from multiple places, and it will apply these rules across the board, so that you don’t have any weird issues cropping up when trying to fit a round peg into a square hole.

    There are two parts of Data Quality Services that help you accomplish all these neat things.  The first part is DQS Server, which you can think of as the back end of the system.  It is installed alongside SQL Server (it needs to be installed separately, after SQL Server is installed) and runs quietly in the background, performing all its cleanup services.

    DQS Client is the user interface that you interact with to set the rules and check over your data.  There are three main aspects of the Client: knowledge base management, data quality projects, and administration.  Knowledge base management is the part of the system that allows you to set the rules – that is, program the “knowledge base” – so that your database is clean and consistent.  Data quality projects are what run in the background and clean up the data that is already present.  Administration allows you to check what DQS Client is doing, change rules, and generally oversee the entire process.  The whole process is user-friendly and a pleasure to use.  I highly recommend implementing Data Quality Services in your database.

    Here are a few of my blog posts related to Data Quality Services; I encourage you to try it out.

    SQL SERVER – Installing Data Quality Services (DQS) on SQL Server 2012
    SQL SERVER – Step by Step Guide to Beginning Data Quality Services in SQL Server 2012 – Introduction to DQS
    SQL SERVER – DQS Error – Cannot connect to server – A .NET Framework error occurred during execution of user-defined routine or aggregate “SetDataQualitySessions” – SetDataQualitySessionPhaseTwo
    SQL SERVER – Configuring Interactive Cleansing Suggestion Min Score for Suggestions in Data Quality Services (DQS) – Sensitivity of Suggestion
    SQL SERVER – Unable to DELETE Project in Data Quality Projects (DQS)

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Fastest pathfinding for static node matrix

    - by Sean Martin
    I'm programming a route finding routine in VB.NET for an online game I play, and I'm searching for the fastest route-finding algorithm for my map type. The game takes place in space, with thousands of solar systems connected by jump gates. The game devs have provided a DB dump containing a list of every system and the systems it can jump to. The map isn't quite a node tree, since some branches can jump to other branches – more of a matrix.

    What I need is a fast pathfinding algorithm. I have already implemented an A* routine and a Dijkstra's; both find the best path but are too slow for my purposes – a search that considers about 5000 nodes takes over 20 seconds to compute. A similar program on a website can do the same search in less than a second. That website claims to use D*, which I have looked into, but that algorithm seems more appropriate for dynamic maps rather than one that does not change – unless I misunderstand its premise. So is there something faster I can use for a map that is not your typical tile/polygon base? GBFS? Perhaps a DFS? Or have I likely got some problem with my A* – maybe poorly chosen heuristics or movement cost? Currently my movement cost is the length of the jump (the DB dump has solar system coordinates as well), and the heuristic is a quick Euclidean calculation from the node to the goal.

    In case anyone has some optimizations for my A*, here is the routine that consumes about 60% of my processing time, according to my profiler. The coordinateData table contains a list of every system's coordinates, and neighborNode.distance is the distance of the jump.

        Private Function findDistance(ByVal startSystem As Integer, ByVal endSystem As Integer) As Integer
            'hCount += 1
            'If hCount Mod 0 = 0 Then
            '    Return hCache
            'End If

            'Initialize variables to be filled
            Dim x1, x2, y1, y2, z1, z2 As Integer

            'LINQ queries for solar system data
            Dim systemFromData = From result In jumpDataDB.coordinateDatas
                                 Where result.systemId = startSystem
                                 Select result.x, result.y, result.z
            Dim systemToData = From result In jumpDataDB.coordinateDatas
                               Where result.systemId = endSystem
                               Select result.x, result.y, result.z

            'LINQ execute - fill variables with solar system data for from and to system
            For Each solarSystem In systemFromData
                x1 = (solarSystem.x)
                y1 = (solarSystem.y)
                z1 = (solarSystem.z)
            Next
            For Each solarSystem In systemToData
                x2 = (solarSystem.x)
                y2 = (solarSystem.y)
                z2 = (solarSystem.z)
            Next

            Dim x3 = Math.Abs(x1 - x2)
            Dim y3 = Math.Abs(y1 - y2)
            Dim z3 = Math.Abs(z1 - z2)

            'Calculate distance and round
            'Dim distance = Math.Round(Math.Sqrt(Math.Abs((x1 - x2) ^ 2) + Math.Abs((y1 - y2) ^ 2) + Math.Abs((z1 - z2) ^ 2)))
            Dim distance = firstConstant * Math.Min(secondConstant * (x3 + y3 + z3), Math.Max(x3, Math.Max(y3, z3)))
            'Dim distance = Math.Abs(x1 - x2) + Math.Abs(z1 - z2) + Math.Abs(y1 - y2)
            'hCache = distance

            Return distance
        End Function

    And the main loop, the other 30%:

        'Begin search
        While openList.Count() <> 0
            'Set current system and move node to closed
            currentNode = lowestF()
            move(currentNode.id)

            For Each neighborNode In neighborNodes
                If Not onList(neighborNode.toSystem, 0) Then
                    If Not onList(neighborNode.toSystem, 1) Then
                        Dim newNode As New nodeData()
                        newNode.id = neighborNode.toSystem
                        newNode.parent = currentNode.id
                        newNode.g = currentNode.g + neighborNode.distance
                        newNode.h = findDistance(newNode.id, endSystem)
                        newNode.f = newNode.g + newNode.h
                        newNode.security = neighborNode.security
                        openList.Add(newNode)
                        shortOpenList(OLindex) = newNode.id
                        OLindex += 1
                    Else
                        Dim proposedG As Integer = currentNode.g + neighborNode.distance
                        If proposedG < gValue(neighborNode.toSystem) Then
                            changeParent(neighborNode.toSystem, currentNode.id, proposedG)
                        End If
                    End If
                End If
            Next

            'Check to see if done
            If currentNode.id = endSystem Then
                Exit While
            End If
        End While

    If clarification is needed on my spaghetti code, I'll try to explain.

    Read the article

  • SQL SERVER – Select the Most Optimal Backup Methods for Server

    - by pinaldave
    Backup and Restore are very interesting concepts, and one should be very familiar with them if you are dealing with a production database. One never knows when a natural disaster or user error will surface, and the first thing everybody wants then is to get back to a point in time when things were all fine. Well, in this article I have attempted to answer a few of the common questions related to backup methodology.

    How to Select a SQL Server Backup Type

    In order to select a proper SQL Server backup type, a SQL Server administrator needs to clearly understand the difference between the major backup types. Since a picture is worth a thousand words, let me offer it to you below.

    Select a Recovery Model First

    The very first question that you should ask yourself is: can I afford to lose at least a little (15 min, 1 hour, 1 day) worth of data? Resist the temptation to save it all, as that comes with overhead – the majority of businesses outside finance can actually afford to lose a bit of data. If your answer is YES, I can afford to lose some data – select the SIMPLE (default) recovery model in the properties of your database; otherwise you need to select the FULL recovery model. An additional advantage of the Full recovery model is that it allows you to restore the data to a specific point in time, versus only to the last backup time in the Simple recovery model, but that exceeds the scope of this article.

    Backups in SIMPLE Recovery Model

    In the SIMPLE recovery model you can choose to do just Full backups, or Full + Differential.

    Full Backup

    This is the simplest type of backup. It contains all the information needed to restore the database and should be your first choice. It is often sufficient for small databases, but note that it makes a big impact on the performance of your database.

    Full + Differential Backup

    After a Full backup, a Differential backup picks up all of the changes since the last Full backup. This means that if you made Full, Diff, Diff backups, the last Diff backup contains all of the changes and you don’t need the previous Differential backup. A Differential backup is obviously smaller and carries less performance overhead.

    Backups in FULL Recovery Model

    In the FULL recovery model you can select Full + Transaction Log, or Full + Differential + Transaction Log backups. You have to create Transaction Log backups, because that is when the log is truncated; otherwise your transaction log will grow uncontrollably.

    Full + Transaction Log Backup

    You always need to perform a Full backup first, then a series of Transaction Log backups. Note that (in contrast to Differential) you need ALL transaction log backups since the last Full or Diff backup to properly restore. Transaction log backups have the smallest performance overhead and can be performed often.

    Full + Differential + Transaction Log Backup

    If you want to ease the performance overhead on your server, you can replace some of the Full backups in the previous scenario with Differentials. Your restore scenario would then start from the Full backup, then the last Differential, then all of the remaining transaction log backups.

    Typical backup Scenarios

    You may say, “Well, it is all nice – give me the examples now.” As you may already know, my favorite SQL backup software is SQLBackupAndFTP. If you go to the Advanced Backup Schedule form in this program and click the “Load a typical backup plan…” link, it will give you these scenarios, which I think are quite common – see the image below.
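    To put the backup types above into T-SQL, here is a minimal sketch of each command, plus the restore order for the Full + Differential + Transaction Log scenario. The database name and backup paths are hypothetical placeholders:

        BACKUP DATABASE MyDb TO DISK = N'D:\Backup\MyDb_full.bak';                    -- Full
        BACKUP DATABASE MyDb TO DISK = N'D:\Backup\MyDb_diff.bak' WITH DIFFERENTIAL;  -- Differential
        BACKUP LOG MyDb TO DISK = N'D:\Backup\MyDb_log1.trn';                         -- Transaction log (FULL recovery model only)

        -- Restore order: the Full backup, the LAST Differential,
        -- then every log backup taken since that Differential
        RESTORE DATABASE MyDb FROM DISK = N'D:\Backup\MyDb_full.bak' WITH NORECOVERY;
        RESTORE DATABASE MyDb FROM DISK = N'D:\Backup\MyDb_diff.bak' WITH NORECOVERY;
        RESTORE LOG MyDb FROM DISK = N'D:\Backup\MyDb_log1.trn' WITH RECOVERY;

    Note the WITH NORECOVERY on every step except the last; it keeps the database in a restoring state so further backups can be applied.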
    The Simplest Way to Schedule SQL Backups

    I hate to repeat myself, but backup scheduling in SQL Agent leaves a lot to be desired. I do not know a simpler way to schedule your SQL Server backups than in SQLBackupAndFTP – see the image below. The whole backup schedule, with compression, encryption and upload to a Network Folder / HDD / NAS Drive / FTP / Dropbox / Google Drive / Amazon S3, takes just a few minutes to set up – see my previous post for the review.

    Final Words

    This post offered an explanation of the major backup types only. For more complicated scenarios, or to research other options, go to MSDN as usual.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Ubuntu + Wacom Intuos 4 + MyPaint HELP!

    - by Sativa
    Please keep in mind I'm not that computer savvy, but I will try any suggestion, so please help me out!

    My tablet will stop working if the USB connection is ever broken, or while the Ubuntu software is being updated. Sometimes it will stop working for no reason that I can see. The lights will still be on, but it won't be responsive. It doesn't work again until I restart the laptop with the tablet plugged in, which is grating if you have to do it every 25 min. or so... I'm not sure if the issue is with the port, the tablet/cable or the driver, but any suggestions would be very welcome!

    Also, MyPaint is frequently having hiccups. It seems to save fine, but at times it will randomly close down, and when I open files they're often empty. They turn into 0 KB files containing only a single empty layer. Also very grating, considering I lose days of work for no clear reason and without any heads-up. Again, I'm not sure if the issue is with the port, the tablet/cable or the driver, but any suggestions would be very welcome!

    The error message reads:

        Traceback (most recent call last):
          File "/usr/share/mypaint/gui/application.py", line 177, at_application_start(*junk=())
            else: self.filehandler.open_file(fn)
          variables: {'fn': ('local', u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora'), 'self.filehandler.open_file': ('local', <bound method FileHandler.wrapper of <gui.filehandling.FileHandler object at 0x7fdb89063a10>>)}
          File "/usr/share/mypaint/gui/drawwindow.py", line 60, wrapper(self=<gui.filehandling.FileHandler object>, *args=(u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora',), **kwargs={})
            try: func(self, *args, **kwargs) # gtk main loop may be called in here...
          variables: {'self': ('local', <gui.filehandling.FileHandler object at 0x7fdb89063a10>), 'args': ('local', (u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora',)), 'func': ('local', <function open_file at 0x7fdb8b397b90>), 'kwargs': ('local', {})}
          File "/usr/share/mypaint/gui/filehandling.py", line 231, open_file(self=<gui.filehandling.FileHandler object>, filename=u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora')
            try: self.doc.model.load(filename, feedback_cb=self.gtk_main_tick)
            except document.SaveLoadError, e:
          variables: {'self.doc.model.load': ('local', <bound method Document.load of <lib.document.Document instance at 0x7fdb8ab4f8c0>>), 'feedback_cb': (None, []), 'self.gtk_main_tick': ('local', <function gtk_main_tick at 0x7fdb8b397b18>), 'filename': ('local', u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora')}
          File "/usr/share/mypaint/lib/document.py", line 544, load(self=<lib.document.Document instance>, filename=u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora', **kwargs={'feedback_cb': <function gtk_main_tick>})
            try: load(filename, **kwargs)
            except gobject.GError, e:
          variables: {'load': ('local', <bound method Document.load_ora of <lib.document.Document instance at 0x7fdb8ab4f8c0>>), 'kwargs': ('local', {'feedback_cb': <function gtk_main_tick at 0x7fdb8b397b18>}), 'filename': ('local', u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora')}
          File "/usr/share/mypaint/lib/document.py", line 772, load_ora(self=<lib.document.Document instance>, filename=u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora', feedback_cb=<function gtk_main_tick>)
            tempdir = tempdir.decode(sys.getfilesystemencoding())
            z = zipfile.ZipFile(filename)
            print 'mimetype:', z.read('mimetype').strip()
          variables: {'zipfile.ZipFile': ('global', <class 'zipfile.ZipFile'>), 'z': (None, []), 'filename': ('local', u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora')}
          File "/usr/lib/python2.7/zipfile.py", line 770, __init__(self=<zipfile.ZipFile object>, file=u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora', mode='r', compression=0, allowZip64=False)
            if key == 'r':
                self._RealGetContents()
            elif key == 'w':
          variables: {'self._RealGetContents': ('local', <bound method ZipFile._RealGetContents of <zipfile.ZipFile object at 0x7fdb9b952790>>)}
          File "/usr/lib/python2.7/zipfile.py", line 811, _RealGetContents(self=<zipfile.ZipFile object>)
            if not endrec:
                raise BadZipfile, "File is not a zip file"
            if self.debug > 1:
          variables: {'BadZipfile': ('global', <class 'zipfile.BadZipfile'>)}
        BadZipfile: File is not a zip file

    Read the article

  • Script to UPDATE STATISTICS with time window

    - by Bill Graziano
    I recently spent some time troubleshooting odd query plans and came to the conclusion that we needed better statistics.  We’ve been running sp_updatestats but apparently it wasn’t sampling enough of the table to get us what we needed.  I have a pretty limited window at night where I can hammer the disks while this runs.  The script below just calls UPDATE STATISTICS on all tables that “need” updating.  It defines need as any table whose statistics are older than the number of days you specify (30 by default).  It also has a throttle so it breaks out of the loop after a set amount of time (60 minutes).  That means it won’t start processing a new table after this time, but it might take longer than this to finish what it’s doing.  It always processes the oldest statistics first, so it will eventually get to all of them.  It defaults to sampling 25% of the table.  I’m not sure that’s a good default, but it works for now.  I’ve tested this in SQL Server 2005 and SQL Server 2008.  I liked the way Michelle parameterized her re-index script and I took the same approach.

        CREATE PROCEDURE dbo.UpdateStatistics (
            @timeLimit smallint = 60
            ,@debug bit = 0
            ,@executeSQL bit = 1
            ,@samplePercent tinyint = 25
            ,@printSQL bit = 1
            ,@minDays tinyint = 30
        )
        AS
        /*******************************************************************
            Copyright Bill Graziano 2010
        *******************************************************************/
        SET NOCOUNT ON;
        PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + 'Launching...'

        IF OBJECT_ID('tempdb..#status') IS NOT NULL
            DROP TABLE #status;

        CREATE TABLE #status (
            databaseID INT
            , databaseName NVARCHAR(128)
            , objectID INT
            , page_count INT
            , schemaName NVARCHAR(128) NULL
            , objectName NVARCHAR(128) NULL
            , lastUpdateDate DATETIME
            , scanDate DATETIME
            CONSTRAINT PK_status_tmp PRIMARY KEY CLUSTERED (databaseID, objectID)
        );

        DECLARE @SQL NVARCHAR(MAX);
        DECLARE @dbName nvarchar(128);
        DECLARE @databaseID INT;
        DECLARE @objectID INT;
        DECLARE @schemaName NVARCHAR(128);
        DECLARE @objectName NVARCHAR(128);
        DECLARE @lastUpdateDate DATETIME;
        DECLARE @startTime DATETIME;
        SELECT @startTime = GETDATE();

        DECLARE cDB CURSOR READ_ONLY FOR
            select [name] from master.sys.databases where database_id > 4

        OPEN cDB
        FETCH NEXT FROM cDB INTO @dbName
        WHILE (@@fetch_status <> -1)
        BEGIN
            IF (@@fetch_status <> -2)
            BEGIN
                SELECT @SQL = '
                    use ' + QUOTENAME(@dbName) + '
                    select DB_ID() as databaseID
                        , DB_NAME() as databaseName
                        , t.object_id
                        , sum(used_page_count) as page_count
                        , s.[name] as schemaName
                        , t.[name] AS objectName
                        , COALESCE(d.stats_date, ''1900-01-01'')
                        , GETDATE() as scanDate
                    from sys.dm_db_partition_stats ps
                    join sys.tables t on t.object_id = ps.object_id
                    join sys.schemas s on s.schema_id = t.schema_id
                    join ( SELECT object_id, MIN(stats_date) as stats_date
                           FROM ( select object_id, stats_date(object_id, stats_id) as stats_date
                                  from sys.stats) as d
                           GROUP BY object_id ) as d ON d.object_id = t.object_id
                    where ps.row_count > 0
                    group by s.[name], t.[name], t.object_id, COALESCE(d.stats_date, ''1900-01-01'')
                '
                SET ANSI_WARNINGS OFF;
                Insert #status
                EXEC ( @SQL );
                SET ANSI_WARNINGS ON;
            END
            FETCH NEXT FROM cDB INTO @dbName
        END
        CLOSE cDB
        DEALLOCATE cDB

        DECLARE cStats CURSOR READ_ONLY FOR
            SELECT databaseID
                , databaseName
                , objectID
                , schemaName
                , objectName
                , lastUpdateDate
            FROM #status
            WHERE DATEDIFF(dd, lastUpdateDate, GETDATE()) >= @minDays
            ORDER BY lastUpdateDate ASC, page_count desc, [objectName] ASC

        OPEN cStats
        FETCH NEXT FROM cStats INTO @databaseID, @dbName, @objectID, @schemaName, @objectName, @lastUpdateDate
        WHILE (@@fetch_status <> -1)
        BEGIN
            IF (@@fetch_status <> -2)
            BEGIN
                IF DATEDIFF(mi, @startTime, GETDATE()) > @timeLimit
                BEGIN
                    PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + '*** Time Limit Reached ***';
                    GOTO __DONE;
                END

                SELECT @SQL = 'UPDATE STATISTICS ' + QUOTENAME(@dBName) + '.' + QUOTENAME(@schemaName) + '.' + QUOTENAME(@ObjectName)
                    + ' WITH SAMPLE ' + CAST(@samplePercent AS NVARCHAR(100)) + ' PERCENT;';

                IF @printSQL = 1
                    PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + @SQL + ' (Last Updated: ' + CAST(@lastUpdateDate AS VARCHAR(100)) + ')'

                IF @executeSQL = 1
                BEGIN
                    EXEC (@SQL);
                END
            END
            FETCH NEXT FROM cStats INTO @databaseID, @dbName, @objectID, @schemaName, @objectName, @lastUpdateDate
        END

        __DONE:
        CLOSE cStats
        DEALLOCATE cStats
        PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + 'Completed.'
        GO
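    A quick usage sketch, assuming the procedure above has been created; the parameter values shown are just the defaults spelled out:

        -- Update statistics older than 30 days with a 25 percent sample,
        -- starting no new table after a 60-minute window
        EXEC dbo.UpdateStatistics
            @timeLimit = 60
            ,@samplePercent = 25
            ,@minDays = 30
            ,@printSQL = 1
            ,@executeSQL = 1;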

    Read the article

  • Restrict number of characters to be typed for af:autoSuggestBehavior

    - by Arunkumar Ramamoorthy
    When using AutoSuggestBehavior on a UI component, the auto-suggest list is displayed as soon as the user starts typing in the field. In this article, we will see how to hold back the auto-suggest list until the user has typed a couple of characters. This would be more useful on low-latency networks and when the auto-suggest list is bigger. We can display a static message to let the user know that they need to type in more characters to get a list to pick a value from. The final output we would expect is like the below image.

    Let's see how we can implement this. Assume we have an input text field for users to enter the country name, with an auto-suggest behavior added to it:

        <af:inputText label="Country" id="it1">
            <af:autoSuggestBehavior />
        </af:inputText>

    Also assume we have a VO (we'll name it CountryView for this example), with a view criteria to filter the VO based on the bind variable passed in. Now, we generate a View Impl class from the java node (including bind variables) and then expose the setter method of the bind variable to the client interface. In the view layer, we create a tree binding for the VO and a method binding for the setter method of the bind variable exposed above, in the PageDef file.

    As we've already added the input text and the autosuggestbehavior for the test, we now need to build the suggested items for the auto-suggest list. Let us add a method in the backing bean that returns a List of select items to be bound to the auto-suggest list.

        public List onSuggest(String searchTerm) {
            ArrayList<SelectItem> selectItems = new ArrayList<SelectItem>();
            if (searchTerm.length() > 1) {
                //get access to the binding context and binding container at runtime
                BindingContext bctx = BindingContext.getCurrent();
                BindingContainer bindings = bctx.getCurrentBindingsEntry();
                //set the bind variable value that is used to filter the View Object
                //query of the suggest list. The View Object instance has a View
                //Criteria assigned
                OperationBinding setVariable = (OperationBinding) bindings.get("setBind_CountryName");
                setVariable.getParamsMap().put("value", searchTerm);
                setVariable.execute();
                //the data in the suggest list is queried by a tree binding.
                JUCtrlHierBinding hierBinding = (JUCtrlHierBinding) bindings.get("CountryView1");
                //re-query the list based on the new bind variable values
                hierBinding.executeQuery();
                //The rangeSet, the list of queried entries, is of type
                //JUCtrlValueBindingRef.
                List<JUCtrlValueBindingRef> displayDataList = hierBinding.getRangeSet();
                for (JUCtrlValueBindingRef displayData : displayDataList) {
                    Row rw = displayData.getRow();
                    //populate the SelectItem list
                    selectItems.add(new SelectItem(
                        (String) rw.getAttribute("Name"),
                        (String) rw.getAttribute("Name")));
                }
            }
            else {
                SelectItem a = new SelectItem("", "Type in two or more characters..", "", true);
                selectItems.add(a);
            }
            return selectItems;
        }

    So, what we are doing in the above method is checking the length of the search term: if it is more than 1 (i.e. 2 or more characters), we return the actual suggest list. Otherwise, we create a read-only select item – new SelectItem("","Type in two or more characters..","",true); – and add it to the list of suggested items to be displayed. The last (boolean) parameter of the SelectItem marks it as read-only, so that users cannot select this static message from the displayed list.

    Finally, bind this method to the input text's autoSuggestBehavior suggestedItems property:

        <af:inputText label="Country" id="it1">
            <af:autoSuggestBehavior suggestedItems="#{AutoSuggestBean.onSuggest}"/>
        </af:inputText>

    Read the article

  • SQL Authority News – Presenting at SQL Bangalore on May 3, 2014 – Performing an Effective Presentation

    - by Pinal Dave
    SQL Bangalore is a wonderful community and we always get a great response when we present on technology. It is a SQL user group and we discuss everything SQL there. This month we have a SQL Server 2014 theme, and we are going to have a community launch on this subject. We have the best of the best speakers presenting on SQL Server 2014 technology. Looking at the whole line-up of celebrity speakers, I have decided not to present on SQL Server itself. I will bring a soft-skills twist instead and present on “Performing an Effective Presentation“. Trust me, you do not want to miss this session; I will be presenting on how to present effectively when covering SQL Server topics.

    What this session will NOT have

    I personally believe that we are all good presenters most of the time. We can all easily call out when someone is a bad presenter. There is no point talking about basics like bigger bullet points, talking loudly, talking with confidence, using better analogies, etc. In simple words – this is not going to be some philosophy session with boring notes.

    What this session will have

    Well, this session will tell stories from my life. It will show how we can present on technology and SQL Server with the help of stories and personal experience. I am going to tell stories about two legends who have inspired me. Right after that we will do two exercises together where we will learn, quickly and effectively, how to become better speakers – instantly! There is no video recording of this session. If you want the resources from this session, please sign up for my newsletter at http://bit.ly/sqllearn

    Here are a few of the slides from this presentation:

    Here are the details about the event and location:

    Venue: Microsoft Corporation, Signature Building, Embassy Golf Links Business Park, Intermediate Ring Road, Domlur, Bangalore – 560071

    The agenda is amazing – we have top-line SQL speakers. Everyone is welcome, and don’t forget to bring a friend along to this event. Loads to learn and tons to share!!!

    Keynote (20 mins) by Anupam Tiwari – Business Program Manager – GTSC
    Backup Enhancements with SQL Server 2014 by Amit Banerjee – PFE, Microsoft
    Performance Enhancements with SQL Server 2014 by Sourabh Agarwal – PFE, Microsoft
    LUNCH BREAK
    Performing an Effective Presentation by Pinal Dave – Community Member (SQLAuthority.com)
    InMemory Enhancements with SQL Server 2014 by Balmukund Lakhani – Support Escalation Engg., Microsoft
    Some more lesser known enhancements with SQL Server 2014 by Vinod Kumar – Technical Architect, Microsoft MTC
    Power Packed – Power BI with SQL Server by Kane Conway – Support Escalation Engg., Microsoft

    I am a very big fan of Amit, Balmukund and Vinod – I have always watched their sessions, and this time I am once again going to attend them without missing a single minute. They are SQL legends; I am going to be there and learn while they share their knowledge.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • WebLogic Server Performance and Tuning: Part II - Thread Management

    - by Gokhan Gungor
    WebLogic Server, like any other java application server, provides resources so that your applications use them to provide services. Unfortunately none of these resources are unlimited and they must be managed carefully. One of these resources is threads which are pooled to provide better throughput and performance along with the fast response time and to avoid deadlocks. Threads are execution points that WebLogic Server delivers its power and execute work. Managing threads is very important because it may affect the overall performance of the entire system. In previous releases of WebLogic Server 9.0 we had multiple execute queues and user defined thread pools. There were different queues for different type of work which had fixed number of execute threads.  Tuning of this thread pools and finding the proper number of threads was time consuming which required many trials. WebLogic Server 9.0 and the following releases use a single thread pool and a single priority-based execute queue. All type of work is executed in this single thread pool. Its size (thread count) is automatically decreased or increased (self-tuned). The new “self-tuning” system simplifies getting the proper number of threads and utilizing them.Work manager allows your applications to run concurrently in multiple threads. Work manager is a mechanism that allows you to manage and utilize threads and create rules/guidelines to follow when assigning requests to threads. We can set a scheduling guideline or priority a request with a work manager and then associate this work manager with one or more applications. At run-time, WebLogic Server uses these guidelines to assign pending work/requests to execution threads. The position of a request in the execute queue is determined by its priority. There is a default work manager that is provided. The default work manager should be sufficient for most applications. However there can be cases you want to change this default configuration. Your application(s) may be providing services that need mixture of fast response time and long running processes like batch updates. However wrong configuration of work managers can lead a performance penalty while expecting improvement.We can define/configure work managers at;•    Domain Level: config.xml•    Application Level: weblogic-application.xml •    Component Level: weblogic-ejb-jar.xml or weblogic.xml(For a specific web application use weblogic.xml)We can use the following predefined rules/constraints to manage the work;•    Fair Share Request Class: Specifies the average thread-use time required to process requests. The default is 50.•    Response Time Request Class: Specifies a response time goal in milliseconds.•    Context Request Class: Assigns request classes to requests based on context information.•    Min Threads Constraint: Limits the number of concurrent threads executing requests.•    Max Threads Constraint: Guarantees the number of threads the server will allocate to requests.•    Capacity Constraint: Causes the server to reject requests only when it has reached its capacity. Let’s create a work manager for our application for a long running work.Go to WebLogic console and select Environment | Work Managers from the domain structure tree. Click New button and select Work manager and click next. Enter the name for the work manager and click next. Then select the managed server instances(s) or clusters from available targets (the one that your long running application is deployed) and finish. 
Click on MyWorkManager, open the Configuration tab, check Ignore Stuck Threads and save. This prevents WebLogic from treating long-running processes (those taking more than a specified time) as stuck, and allows them to finish.
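Work managers can also be defined in deployment descriptors instead of the console. Below is a minimal sketch of an application-level definition in weblogic-application.xml; the work manager name and constraint count are illustrative only, and the element names should be verified against the descriptor schema of your WebLogic version:

<weblogic-application xmlns="http://xmlns.oracle.com/weblogic/weblogic-application">
    <!-- Illustrative work manager for long-running work -->
    <work-manager>
        <name>MyWorkManager</name>
        <!-- Cap concurrent threads for this application (hypothetical value) -->
        <max-threads-constraint>
            <name>MyMaxThreads</name>
            <count>10</count>
        </max-threads-constraint>
        <!-- Do not treat long-running requests in this work manager as stuck -->
        <ignore-stuck-threads>true</ignore-stuck-threads>
    </work-manager>
</weblogic-application>

The same work manager name is then referenced from the application (for example via dispatch-policy) so that its requests are scheduled under these constraints.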

    Read the article

  • Using HTML5 Today part 3&ndash; Using Polyfills

    - by Steve Albers
Shims help when adding semantic tags to older IE browsers, but there is a huge range of other new HTML5 features with varying support across browsers.  Polyfills are JavaScript code and/or browser plug-ins that can provide older or less featured browsers with API support.  The best polyfills detect whether the current browser has native support and only add the functionality if necessary.  The Douglas Crockford JSON2.js library is an example of this approach: if the browser already supports the JSON object, nothing changes.  If JSON is not available, the library adds a JSON property to the global object. This approach provides some big benefits: It lets you add great new HTML5 features to your web sites sooner. It lets the developer focus on writing to the up-and-coming standard rather than proprietary APIs. Where most one-off legacy code fixes tend to break down over time, well done polyfills stop executing over time (as customer browsers natively support the feature), meaning polyfill code may not need to be tested against new browsers, since they will execute the native methods instead. You should also remember that polyfills represent an entirely separate code path (and sometimes a plug-in) that requires testing for support.  Polyfills also tend to run on older browsers, which often have slower JavaScript performance, so you might find that performance on older browsers is not comparable. When looking for polyfills you can start by checking the Modernizr GitHub wiki or the HTML5 Please site. For an example of a polyfill, consider a page that draws a few geometric shapes on a <canvas>:

<script src="jquery-1.7.1.min.js"></script>
<script>
$(document).ready(function () {
    drawCanvas();
});

function drawCanvas() {
    var context = $("canvas")[0].getContext('2d');
    // background
    context.fillStyle = "#8B0000";
    context.fillRect(5, 5, 300, 100);
    // empty box
    context.strokeStyle = "#B0C4DE";
    context.lineWidth = 4;
    context.strokeRect(20, 15, 80, 80);
    // circle
    context.arc(160, 55, 40, 0, Math.PI * 2, false);
    context.fillStyle = "#4B0082";
    context.fill();
}
</script>

The result is a simple static canvas with a box and a circle. To enable this functionality on a pre-canvas browser we can find a polyfill.  A check on html5please.com references FlashCanvas.  Pull down the zip and extract the files (flashcanvas.js, flash10canvas.swf, etc.) to a directory on your site.  Then, based on the documentation, you need to add a single line to your original HTML file:

<!--[if lt IE 9]><script src="flashcanvas.js"></script><![endif]-->

…and you have canvas functionality!  The IE conditional comments ensure that the library is only loaded in browsers where it is useful, improving page load and processing time. Like all polyfills, you should test to verify the functionality matches your expectations across the browsers you need to support.  For instance, the FlashCanvas home page advertises 70% support of the HTML5 canvas spec tests.
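As a side note, the detection half of the polyfill pattern is easy to sketch yourself. The following is a minimal hand-rolled check (not part of FlashCanvas) that you could use to decide whether a canvas fallback is needed at all:

function supportsCanvas() {
    // Create a throwaway element and probe for the 2D context API
    var el = document.createElement('canvas');
    return !!(el.getContext && el.getContext('2d'));
}

if (!supportsCanvas()) {
    // Load or enable the fallback (e.g. FlashCanvas) only here,
    // so native browsers never pay for the extra code path.
}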

    Read the article

  • Wireless connection works but the internet is too slow to use in Ubuntu 11.04

    - by Garrin
The internet is so slow as to be unusable, and I'm not being picky. Even after minutes I can't get my Google home page to load. I tried installing a package through apt-get and was getting rates between 0 and a few hundred bytes/s. That's bytes, not kilobytes! Mostly 0, however (no exaggeration, it spends large amounts of time stalled). I would go to a speed test web site of some kind, but I can't, since nothing will load. Briefly put, the laptop I am using was connected to two wireless networks while using Ubuntu 11.04 without any issues before this. It was also connected to a wired network without any issues. It dual-boots Windows 7, which has never had any issues, not even with the current wireless network. Just to be clear: on the current wi-fi network, Windows 7 encounters no issues (speedtest.net puts the network speed at 1 Mb/s), but my network connection in Ubuntu 11.04 is so slow as to literally be unusable. I am unfamiliar with the router except for the fact that it boasts a Rogers logo (that's a large ISP/cable provider in Canada, for those not familiar with the land of igloos and polar bears). I am far from the router, and some desktop widget I use tells me the signal strength is at 58% (it seems fairly reliable and this appears to match up with the filled bars in the network icon). I should also mention I'm just renting a room in this house, so I'm not the network administrator, and while I can access the 192.168.0.1 router page, the password wasn't set to 'password', so it's not much use to me. Here are a bunch of commands I ran which don't tell me a whole lot, but I thought they might be more instructive to the wise around here.

lspci (just showing my network card):

05:00.0 Network controller: Atheros Communications Inc. AR928X Wireless Network Adapter (PCI-Express) (rev 01)

ping (this one is self-explanatory):

PING www.googele.com (216.65.41.185) 56(84) bytes of data.
64 bytes from nnw.net (216.65.41.185): icmp_req=1 ttl=51 time=267 ms
64 bytes from nnw.net (216.65.41.185): icmp_req=2 ttl=51 time=190 ms
64 bytes from nnw.net (216.65.41.185): icmp_req=3 ttl=51 time=212 ms
64 bytes from nnw.net (216.65.41.185): icmp_req=4 ttl=51 time=207 ms
64 bytes from nnw.net (216.65.41.185): icmp_req=5 ttl=51 time=220 ms

--- www.googele.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4003ms
rtt min/avg/max/mdev = 190.079/219.699/267.963/26.121 ms

ifconfig:

eth0      Link encap:Ethernet  HWaddr 20:6a:8a:02:20:da
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
          Interrupt:42

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:16 errors:0 dropped:0 overruns:0 frame:0
          TX packets:16 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:960 (960.0 B)  TX bytes:960 (960.0 B)

wlan0     Link encap:Ethernet  HWaddr 20:7c:8f:05:c6:bf
          inet addr:192.168.0.16  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::227c:8fff:fe05:c6bf/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:982 errors:0 dropped:0 overruns:0 frame:0
          TX packets:658 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:497250 (497.2 KB)  TX bytes:95076 (95.0 KB)

Thank you

    Read the article

  • How can I get the data from the json one by one by using javascript/jquery? [on hold]

    - by sandhus
I have working code that fetches all the records from the JSON, but how can I show them one by one on the click of a button or link? The following code works to fetch all the records at once:

<!doctype html>
<html>
<head>
<meta charset="utf-8">
<title>jQuery PHP Json Response</title>
<style type="text/css">
div { text-align: center; padding: 10px; }
#msg { width: 500px; margin: 0px auto; }
.members { width: 500px; background-color: beige; }
</style>
</head>
<body>
<div id="msg">
    <table id="userdata" border="1">
        <thead>
            <th>Email</th>
            <th>Sex</th>
            <th>Location</th>
            <th>Picture</th>
            <th>audio</th>
            <th>video</th>
        </thead>
        <tbody></tbody>
    </table>
</div>
<script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js"></script>
<script type="text/javascript">
$(document).ready(function () {
    var url = "json.php";
    $("#userdata tbody").html("");
    $.getJSON(url, function (data) {
        $.each(data.members, function (i, user) {
            var tblRow = "<tr>"
                + "<td>" + user.email + "</td>"
                + "<td>" + user.sex + "</td>"
                + "<td>" + user.location + "</td>"
                + "<td>" + "<img src=" + user.image + ">" + "</td>"
                + "<td>" + "<audio src=" + user.audio + " controls>" + "</td>"
                + "<td>" + "<video src=" + user.video + " controls>" + "</td>"
                + "</tr>";
            $(tblRow).appendTo("#userdata tbody");
        });
    });
});
</script>
</body>
</html>

I used the json_encode function in the PHP file to encode the SQL db. How can I achieve this?
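One possible approach, sketched only against the same data.members array and a hypothetical button with id="next" (not in the original markup): cache the parsed array once, keep an index, and render a single row per click:

var members = [], index = 0;

$.getJSON("json.php", function (data) {
    members = data.members; // cache the full result once
});

$("#next").click(function () {
    if (index >= members.length) { return; } // nothing left to show
    var user = members[index++];
    var tblRow = "<tr>"
        + "<td>" + user.email + "</td>"
        + "<td>" + user.sex + "</td>"
        + "<td>" + user.location + "</td>"
        + "</tr>";
    $("#userdata tbody").html(tblRow); // replace the previously shown row
});

Appending instead of replacing (appendTo rather than html) would reveal the records cumulatively, one more per click.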

    Read the article

  • Disabling the right-click sub menu using JQuery

    - by nikolaosk
Recently I needed to disable the right-click context menu in an HTML page for a very simple HTML application I was creating for a friend. This is going to be a short post where I will demonstrate how to disable the right-click context menu. I will use the very popular jQuery library. Please download the library (minified version) from http://jquery.com/download. Please find here all my posts regarding jQuery. In this hands-on example I will be using Expression Web 4.0. Expression Web is not a free application, so feel free to use any HTML editor you like, such as Visual Studio 2012 Express Edition, which you can download here. I am going to create a very simple HTML 5 page with some text and an image. The HTML markup for the page follows.

<!DOCTYPE html>
<html lang="en">
<head>
    <title>HTML 5, CSS3 and JQuery</title>
    <meta http-equiv="Content-Type" content="text/html;charset=utf-8" >
    <link rel="stylesheet" type="text/css" href="style.css">
    <script type="text/javascript" src="jquery-1.8.2.min.js"></script>
    <script type="text/javascript">
        (function ($) {
            $(document).bind('contextmenu', function () { return false; });
        })(jQuery);
    </script>
</head>
<body>
    <div id="header">
        <h1>Learn cutting edge technologies</h1>
        <h2>HTML 5, JQuery, CSS3</h2>
    </div>
    <figure>
        <img src="html5.png" alt="HTML 5">
    </figure>
    <div id="main">
        <h2>HTML 5</h2>
        <article>
            <p>HTML5 is the latest version of HTML and XHTML. The HTML standard defines a single language that can be written in HTML and XML. It attempts to solve issues found in previous iterations of HTML and addresses the needs of Web Applications, an area previously not adequately covered by HTML.</p>
        </article>
    </div>
</body>
</html>

This is the jQuery code I use:

(function ($) {
    $(document).bind('contextmenu', function () { return false; });
})(jQuery);

I simply disable/cancel the contextmenu event. When I load the simple page in the browser and right-click, the context menu does not appear. Hope it helps!!!
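A small aside: .bind() has since been superseded by .on() (available from jQuery 1.7 onward), so on newer jQuery versions the equivalent idiom, sketched here, is:

$(document).on('contextmenu', function () { return false; });

The behaviour is the same: returning false cancels the contextmenu event before the browser shows its menu.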

    Read the article

  • Is this spaghetti code already? [migrated]

    - by hephestos
I post the following code, written all by hand. Why do I have the feeling that it is a spaghetti western all on its own? Second, could it be written better?

<div id="form-board" class="notice" style="height: 200px; min-height: 109px; width: auto; display: none;">
    <script type="text/javascript">
        jQuery(document).ready(function () {
            $(".form-button-slide").click(function () {
                $("#form-board").dialog();
                return false;
            });
        });
    </script>
    <?php
    echo $this->Form->create('mysubmit');
    echo $this->Form->input('inputs', array('type' => 'select', 'id' => 'inputs', 'options' => $inputs));
    echo $this->Form->input('Fields', array('type' => 'select', 'id' => 'fields', 'empty' => '-- Pick a state first --'));
    echo $this->Form->input('inputs2', array('type' => 'select', 'id' => 'inputs2', 'options' => $inputs2));
    echo $this->Form->input('Fields2', array('type' => 'select', 'id' => 'fields2', 'empty' => '-- Pick a state first --'));
    echo $this->Form->end("Submit");
    ?>
</div>
<div style="width:100%"></div>
<div class="form-button-slide" style="float:left; display:block;">
    <?php echo $this->Html->link("Error Results", "#"); ?>
</div>
<script type="text/javascript">
    jQuery(document).ready(function () {
        $("#mysubmitIndexForm").submit(function () {
            // store the values from the form input boxes, then send via ajax below
            jQuery.post("Staffs/view", {
                data1: $("#inputs").attr('value'),
                data2: $("#inputs2").attr('value'),
                data3: $("#fields").attr('value'),
                data4: $("#fields2").attr('value')
            });
            // Close the dialog
            $("#form-board").dialog('close');
            return false;
        });
        $("#inputs").change(function () {
            // store the value from the form input box, then send via ajax below
            var input_id = $('#inputs').attr('value');
            $.ajax({
                type: "POST",
                // The controller that listens to our request
                url: "Inputs/getFieldsFromOneInput/" + input_id,
                data: "input_id=" + input_id, //+ "&lname=" + lname,
                success: function (data) { // on success, with returned data
                    $('form#mysubmit').hide(function () {});
                    data = $.parseJSON(data);
                    var sel = $("#fields");
                    sel.empty();
                    for (var i = 0; i < data.length; i++) {
                        sel.append('<option value="' + data[i].id + '">' + data[i].name + '</option>');
                    }
                }
            });
            return false;
        });
        $("#inputs2").change(function () {
            var input_id = $('#inputs2').attr('value');
            $.ajax({
                type: "POST",
                url: "Inputs/getFieldsFromOneInput/" + input_id,
                data: "input_id=" + input_id,
                success: function (data) {
                    $('form#mysubmit').hide(function () {});
                    data = $.parseJSON(data);
                    var sel = $("#fields2");
                    sel.empty();
                    for (var i = 0; i < data.length; i++) {
                        sel.append('<option value="' + data[i].id + '">' + data[i].name + '</option>');
                    }
                }
            });
            return false;
        });
    });
</script>
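As for whether it could be written better: the two change handlers differ only in their selectors, so one hedged refactoring (a sketch against the same markup and controller URL; the form-hiding call is omitted for brevity) is to factor them into a single helper:

function wireFieldLoader(srcSel, destSel) {
    $(srcSel).change(function () {
        var input_id = $(this).val();
        // $.post is shorthand for the type: "POST" $.ajax call above
        $.post("Inputs/getFieldsFromOneInput/" + input_id,
               { input_id: input_id },
               function (data) {
                   data = $.parseJSON(data);
                   var sel = $(destSel).empty();
                   for (var i = 0; i < data.length; i++) {
                       sel.append('<option value="' + data[i].id + '">' + data[i].name + '</option>');
                   }
               });
        return false;
    });
}

// Wire both pairs of dropdowns with the same logic
wireFieldLoader("#inputs", "#fields");
wireFieldLoader("#inputs2", "#fields2");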

    Read the article

  • sp_send_dbmail attach files stored as varbinary in database

    - by Mindstorm Interactive
I have a two-part question relating to sending query results as attachments using sp_send_dbmail.

Problem 1: Only basic .txt files will open. Any other format like .pdf or .jpg arrives corrupted.
Problem 2: When attempting to send multiple attachments, I receive one file with all the file names glued together.

I'm running SQL Server 2005 and I have a table storing uploaded documents:

CREATE TABLE [dbo].[EmailAttachment](
    [EmailAttachmentID] [int] IDENTITY(1,1) NOT NULL,
    [MassEmailID] [int] NULL, -- foreign key
    [FileData] [varbinary](max) NOT NULL,
    [FileName] [varchar](100) NOT NULL,
    [MimeType] [varchar](100) NOT NULL

I also have a MassEmail table with standard email stuff. Here is the SQL send mail script (for brevity, I've excluded the declare statements):

while ((select count(MassEmailID) from MassEmail where status = 20) > 0)
begin
    select @MassEmailID = Min(MassEmailID) from MassEmail where status = 20
    select @Subject = [Subject] from MassEmail where MassEmailID = @MassEmailID
    select @Body = Body from MassEmail where MassEmailID = @MassEmailID

    set @query = 'set nocount on; select cast(FileData as varchar(max)) from Mydatabase.dbo.EmailAttachment where MassEmailID = ' + CAST(@MassEmailID as varchar(100))

    select @filename = ''
    select @filename = COALESCE(@filename + ',', '') + FileName from EmailAttachment where MassEmailID = @MassEmailID

    exec msdb.dbo.sp_send_dbmail
        @profile_name = 'MASS_EMAIL',
        @recipients = '[email protected]',
        @subject = @Subject,
        @body = @Body,
        @body_format = 'HTML',
        @query = @query,
        @query_attachment_filename = @filename,
        @attach_query_result_as_file = 1,
        @query_result_separator = '; ',
        @query_no_truncate = 1,
        @query_result_header = 0;

    update MassEmail set status = 30, SendDate = GetDate() where MassEmailID = @MassEmailID
end

I am able to successfully read files from the database, so I know the binary data is not corrupted. .txt files only open when I cast FileData to varchar, but clearly the original headers are lost. It's also worth noting that the attachment file sizes differ from the original files, most likely due to improper encoding as well. So I'm hoping there's a way to create file headers using the stored mimetype, or some way to include file headers in the binary data? I'm also not confident in the values of the last few parameters, and I know the COALESCE is not quite right, because it prepends the first file name with a comma. But good documentation is nearly impossible to find. Please help!
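On the second problem, the leading comma comes from initializing @filename to an empty string, which defeats the COALESCE. A hedged sketch of the usual string-aggregation idiom (start from NULL so the first iteration adds no separator):

declare @filename varchar(max);
set @filename = NULL; -- NULL, not '', so COALESCE skips the first separator

select @filename = COALESCE(@filename + '; ', '') + FileName
from EmailAttachment
where MassEmailID = @MassEmailID;

The first problem is harder to patch this way: @query attachments are written out as query result text, so casting varbinary file data to varchar cannot preserve binary formats like .pdf or .jpg; only plain text survives that round trip.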

    Read the article

  • bulk insert and update with ADO.NET Entity Framework

    - by Keith Barrows
I am writing a small application that does a lot of feed processing. I want to use LINQ EF for this, as speed is not an issue; it is a single-user app and, in the end, will only be used once a month. My question revolves around the best way to do bulk inserts using LINQ EF. After parsing the incoming data stream I end up with a List of values. Since the end user may end up trying to import some duplicate data, I would like to "clean" the data during insert rather than reading all the records, doing a for loop, rejecting records, then finally importing the remainder. This is what I am currently doing:

DateTime minDate = dataTransferObject.Min(c => c.DoorOpen);
DateTime maxDate = dataTransferObject.Max(c => c.DoorOpen);
using (LabUseEntities myEntities = new LabUseEntities())
{
    var recCheck = myEntities.ImportDoorAccess.Where(a => a.DoorOpen >= minDate && a.DoorOpen <= maxDate).ToList();
    if (recCheck.Count > 0)
    {
        foreach (ImportDoorAccess ida in recCheck)
        {
            DoorAudit da = dataTransferObject.Where(a => a.DoorOpen == ida.DoorOpen && a.CardNumber == ida.CardNumber).First();
            if (da != null)
                da.DoInsert = false;
        }
    }
    ImportDoorAccess newIDA;
    foreach (DoorAudit newDoorAudit in dataTransferObject)
    {
        if (newDoorAudit.DoInsert)
        {
            newIDA = new ImportDoorAccess
            {
                CardNumber = newDoorAudit.CardNumber,
                Door = newDoorAudit.Door,
                DoorOpen = newDoorAudit.DoorOpen,
                Imported = newDoorAudit.Imported,
                RawData = newDoorAudit.RawData,
                UserName = newDoorAudit.UserName
            };
            myEntities.AddToImportDoorAccess(newIDA);
        }
    }
    myEntities.SaveChanges();
}

I am also getting this error:

System.Data.UpdateException was unhandled
Message="Unable to update the EntitySet 'ImportDoorAccess' because it has a DefiningQuery and no <InsertFunction> element exists in the <ModificationFunctionMapping> element to support the current operation."
Source="System.Data.SqlServerCe.Entity"

What am I doing wrong? Any pointers are welcome.
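On the UpdateException: that message usually means Entity Framework mapped ImportDoorAccess as read-only because the table has no primary key; keyless tables are imported with a DefiningQuery, like views. If that is the case here, a sketch of the fix, using a hypothetical key column name, would be:

-- Hedged sketch: assumes ImportDoorAccess has an identity column
-- named ImportDoorAccessID (hypothetical name)
ALTER TABLE dbo.ImportDoorAccess
    ADD CONSTRAINT PK_ImportDoorAccess PRIMARY KEY (ImportDoorAccessID);

After adding the key, update the model from the database so the entity gets a normal table mapping instead of the DefiningQuery.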

    Read the article

  • WPF / C#: Transforming coordinates from an image control to the image source

    - by Gabriel
I'm trying to learn WPF, so here's a simple question, I hope: I have a window that contains an Image element bound to a separate data object with a user-configurable Stretch property:

<Image Name="imageCtrl" Source="{Binding MyImage}" Stretch="{Binding ImageStretch}" />

When the user moves the mouse over the image, I would like to determine the coordinates of the mouse with respect to the original image (before the stretching/cropping that occurs when it is displayed in the control), and then do something with those coordinates (update the image). I know I can add an event handler to the MouseMove event over the Image control, but I'm not sure how best to transform the coordinates:

void imageCtrl_MouseMove(object sender, MouseEventArgs e)
{
    Point locationInControl = e.GetPosition(imageCtrl);
    Point locationInImage = ???;
    updateImage(locationInImage);
}

Now I know I could compare the size of Source to the ActualSize of the control, then switch on imageCtrl.Stretch to compute the scalars and offsets on X and Y, and do the transform myself. But WPF has all the information already, and this seems like functionality that might be built into the WPF libraries somewhere. So I'm wondering: is there a short and sweet solution? Or do I need to write this myself?

EDIT: I'm appending my current, not-so-short-and-sweet solution. It's not that bad, but I'd be somewhat surprised if WPF didn't provide this functionality automatically:

Point ImgControlCoordsToPixelCoords(Point locInCtrl, double imgCtrlActualWidth, double imgCtrlActualHeight)
{
    if (ImageStretch == Stretch.None)
        return locInCtrl;

    Size renderSize = new Size(imgCtrlActualWidth, imgCtrlActualHeight);
    Size sourceSize = bitmap.Size;

    double xZoom = renderSize.Width / sourceSize.Width;
    double yZoom = renderSize.Height / sourceSize.Height;

    if (ImageStretch == Stretch.Fill)
        return new Point(locInCtrl.X / xZoom, locInCtrl.Y / yZoom);

    double zoom;
    if (ImageStretch == Stretch.Uniform)
        zoom = Math.Min(xZoom, yZoom);
    else // (ImageStretch == Stretch.UniformToFill)
        zoom = Math.Max(xZoom, yZoom);

    return new Point(locInCtrl.X / zoom, locInCtrl.Y / zoom);
}
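For completeness, the helper above plugs straight into the MouseMove handler from the question; ActualWidth/ActualHeight come from the Image control at the time of the event:

void imageCtrl_MouseMove(object sender, MouseEventArgs e)
{
    Point locationInControl = e.GetPosition(imageCtrl);
    // Map from control space to source-image pixel space using the helper above
    Point locationInImage = ImgControlCoordsToPixelCoords(
        locationInControl, imageCtrl.ActualWidth, imageCtrl.ActualHeight);
    updateImage(locationInImage);
}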

    Read the article

  • xcode linker error on iPhone app (Only on simulator)

    - by RexOnRoids
I'm getting a linker error that won't let me compile. It only happens on the simulator.

KEY POINTS:
- Happens only in the simulator
- Similar to THIS question, but I found no FRAMEWORK_SEARCH_PATHS in my .pbxproj file
- Though my OS is 10.6.2, I had to build against target 10.5 to avoid other linker errors
- libxml2.dylib IS required and is in my Frameworks group
- The other cited libraries I have never heard of
- Tried bringing in those other libs under Frameworks; didn't solve it

Build log:

Build SpaceTweet of project SpaceTweet with configuration Debug
Ld build/Debug-iphonesimulator/SpaceTweet.app/SpaceTweet normal i386
cd "/Users/Scott/Desktop/iPhone Dev/SpaceTweet(Experimental)"
setenv MACOSX_DEPLOYMENT_TARGET 10.5
setenv PATH "/Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin:/Developer/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin"
/Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin/gcc-4.2 -arch i386 -isysroot /Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator3.1.3.sdk "-L/Users/Scott/Desktop/iPhone Dev/SpaceTweet(Experimental)/build/Debug-iphonesimulator" -L/Users/Scott/Desktop "-L/Users/Scott/Desktop/iPhone Dev/SpaceTweet(Experimental)/../../libYAJLIPhone-0" -L/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.0.sdk/usr/lib -L/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator3.1.3.sdk/usr/lib -L/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator3.0.sdk/usr/lib "-F/Users/Scott/Desktop/iPhone Dev/SpaceTweet(Experimental)/build/Debug-iphonesimulator" -filelist "/Users/Scott/Desktop/iPhone Dev/SpaceTweet(Experimental)/build/SpaceTweet.build/Debug-iphonesimulator/SpaceTweet.build/Objects-normal/i386/SpaceTweet.LinkFileList" -mmacosx-version-min=10.5 -framework Foundation -framework UIKit -framework CoreGraphics -framework AVFoundation -framework MessageUI -lYAJLIPhone -lxml2 -o "/Users/Scott/Desktop/iPhone Dev/SpaceTweet(Experimental)/build/Debug-iphonesimulator/SpaceTweet.app/SpaceTweet"
ld: warning: in /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.0.sdk/usr/lib/libxml2.dylib, missing required architecture i386 in file
ld: warning: in /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.0.sdk/usr/lib/libSystem.dylib, missing required architecture i386 in file
ld: in /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.0.sdk/usr/lib/libobjc.A.dylib, missing required architecture i386 in file
collect2: ld returned 1 exit status
Command /Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin/gcc-4.2 failed with exit code 1

CLUE: Again, my question is very similar to THIS SOLVED QUESTION, except that in my case I did NOT find a FRAMEWORK_SEARCH_PATHS entry in the .pbxproj file in my project bundle, and thus could not solve it in the manner in which that question was solved.

    Read the article

< Previous Page | 81 82 83 84 85 86 87 88 89 90 91 92  | Next Page >