Search Results

Search found 65244 results on 2610 pages for 'jim work'.


  • How can I get bitfields to arrange my bits in the right order?

    - by Jim Hunziker
    To begin with, the application in question is always going to be on the same processor, and the compiler is always gcc, so I'm not concerned about bitfields not being portable. gcc lays out bitfields such that the first listed field corresponds to the least significant bit of a byte. So with the following structure, setting a=0, b=1, c=1, d=1 gives a byte of value e0:

        struct Bits {
            unsigned int a:5;
            unsigned int b:1;
            unsigned int c:1;
            unsigned int d:1;
        } __attribute__((__packed__));

    (Actually, this is C++, so I'm talking about g++.) Now let's say I'd like a to be a six-bit integer. I can see why this won't work, but I coded the following structure:

        struct Bits2 {
            unsigned int a:6;
            unsigned int b:1;
            unsigned int c:1;
            unsigned int d:1;
        } __attribute__((__packed__));

    Setting b, c, and d to 1, and a to 0, results in the following two bytes:

        c0 01

    This isn't what I wanted. I was hoping to see this:

        e0 00

    Is there any way to specify a structure that has three bits in the most significant bits of the first byte and six bits spanning the five least significant bits of the first byte and the most significant bit of the second? Please be aware that I have no control over where these bits are supposed to be laid out: it's a layout of bits that is defined by someone else's interface.

    Read the article

  • Creating a property setter delegate

    - by Jim C
    I have created methods for converting a property lambda to a delegate:

        public static Delegate MakeGetter<T>(Expression<Func<T>> propertyLambda)
        {
            var result = Expression.Lambda(propertyLambda.Body).Compile();
            return result;
        }

        public static Delegate MakeSetter<T>(Expression<Action<T>> propertyLambda)
        {
            var result = Expression.Lambda(propertyLambda.Body).Compile();
            return result;
        }

    These work:

        Delegate getter = MakeGetter(() => SomeClass.SomeProperty);
        object o = getter.DynamicInvoke();

        Delegate getter = MakeGetter(() => someObject.SomeProperty);
        object o = getter.DynamicInvoke();

    but these won't compile:

        Delegate setter = MakeSetter(() => SomeClass.SomeProperty);
        setter.DynamicInvoke(new object[] { propValue });

        Delegate setter = MakeSetter(() => someObject.SomeProperty);
        setter.DynamicInvoke(new object[] { propValue });

    The MakeSetter lines fail with "The type arguments cannot be inferred from the usage. Try specifying the type arguments explicitly." Is what I'm trying to do possible? Thanks in advance.
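    One workaround that comes up for this kind of problem, shown here only as a minimal sketch (it is not from the original post, and it assumes .NET 4, which added Expression.Assign, plus a writable property): keep taking the getter-style Func lambda, pull out its MemberExpression, and build the assignment by hand. The method name below is illustrative.

        using System;
        using System.Linq.Expressions;

        public static class PropertyDelegates
        {
            // Sketch: build a compiled setter from a () => obj.Property lambda.
            public static Delegate MakeSetterFromGetter<T>(Expression<Func<T>> propertyLambda)
            {
                // () => someObject.SomeProperty (or SomeClass.SomeProperty)
                // has a MemberExpression body pointing at the property.
                var member = (MemberExpression)propertyLambda.Body;

                // Build (T value) => someObject.SomeProperty = value and compile it.
                var valueParam = Expression.Parameter(typeof(T), "value");
                var assign = Expression.Assign(member, valueParam);
                return Expression.Lambda<Action<T>>(assign, valueParam).Compile();
            }
        }

    Used the same way as the getters above, e.g. MakeSetterFromGetter(() => someObject.SomeProperty).DynamicInvoke(new object[] { propValue }).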

    Read the article

  • How do I unit test controllers for an asp.net mvc site that uses StructureMap and NHibernate?

    - by Jim Geurts
    I have an asp.net mvc2 application that is using StructureMap 2.6 and NHibernate 3.x. I would like to add unit tests to the application but am sort of at a loss for how to accomplish it. Say I have a basic controller called Posts that has an action called Index. The controller looks something like:

        public class PostsController : Controller
        {
            private readonly IPostService _postService;

            public PostsController(IPostService postService)
            {
                _postService = postService;
            }

            public ActionResult Index()
            {
                return View(_postService.QueryOver<Post>().Future());
            }
        }

    If I wanted to create an nunit test that would verify that the index action is returning all of the posts, how do I go about that? If mocking is recommended, do you just assume that interaction with the database will work? Sorry for asking such a broad question, but my web searches haven't turned up anything decent for how to unit test asp.net mvc actions that use StructureMap (or any other IOC) and NHibernate. btw, if you don't like that I return a QueryOver object from my post service, pretend it is an IQueryable object. I'm using it essentially in the same way.
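    A minimal sketch of one way this is often tackled (not from the thread; it leans on the question's own suggestion to treat the service's return value as an IQueryable). Everything below that isn't in the original post (the Query() member, the reduced interface, the Post stand-in) is an assumption for illustration: the idea is simply that the controller talks to a seam you can fake in memory, so neither StructureMap nor NHibernate is involved in the unit test.

        using System.Collections.Generic;
        using System.Linq;
        using System.Web.Mvc;
        using NUnit.Framework;

        // Stand-ins for the real types, reduced to an IQueryable-shaped seam.
        public class Post { }
        public interface IPostService { IQueryable<Post> Query(); }

        // Controller under test, written against the reduced seam.
        public class PostsController : Controller
        {
            private readonly IPostService _postService;
            public PostsController(IPostService postService) { _postService = postService; }
            public ActionResult Index() { return View(_postService.Query().ToList()); }
        }

        // In-memory fake; no database, no container.
        public class FakePostService : IPostService
        {
            private readonly List<Post> _posts;
            public FakePostService(List<Post> posts) { _posts = posts; }
            public IQueryable<Post> Query() { return _posts.AsQueryable(); }
        }

        [TestFixture]
        public class PostsControllerTests
        {
            [Test]
            public void Index_puts_all_posts_in_the_model()
            {
                var posts = new List<Post> { new Post(), new Post() };
                var controller = new PostsController(new FakePostService(posts));

                var result = (ViewResult)controller.Index();
                var model = (IEnumerable<Post>)result.ViewData.Model;

                Assert.AreEqual(2, model.Count());
            }
        }

    In this style StructureMap only matters for composing the real application; in tests the dependency is handed in by hand, and whether the database interaction actually works is left to separate integration tests against the NHibernate mappings.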

    Read the article

  • How can I automatically elevate a COM interface used for automation?

    - by Jim Flood
    I have a Windows service built with ATL to expose a LocalServer32 COM interface for a set of admin commands used for configuring the service, and these can be used from VBScript for example:

        Set myObj = WScript.CreateObject("MySvc.Administrator")
        myObj.DoSomething()

    I want DoSomething to run elevated, and I would like the UAC prompt to come up automatically when this is called by the VBScript. Is this possible? I know I can run the script in an elevated command shell, and that I can use

        objShell.ShellExecute WScript.FullName, Chr(34) & WScript.ScriptFullName & Chr(34), vbNullString, "runas"

    for example, to run the VBScript itself elevated, and either of those work fine -- the COM method finds itself elevated. However, AFAIK getting an elevated Explorer window on the desktop is convoluted (it's not as simple as right-clicking Start/Accessories/Windows Explorer/Run as Administrator, which doesn't actually elevate.) I want a user in the local admin group to be able to drag-and-drop files and folders onto the script, and then have the script call the admin COM interface with those pathnames as arguments. (And I am hoping for something simpler than monkeying around with the args and using ShellExecute "runas".) I've tried setting UAC Execution Level to requireAdministrator in the service EXE's manifest, and setting Elevated/Enabled = 1 and LocalizedString in the registry for the MySvc.Administrator class, and these don't do the trick.

    Read the article

  • How to syntax-highlight XML in CDATA elements in Vim?

    - by Jim Hurne
    Vim's syntax highlighting for XML/XSL is great, except it turns off all syntax highlighting in CDATA regions. Is there a way to turn syntax highlighting on in CDATA regions? At work, we have a lot of XSL code embedded within other XML documents. It would be great if I could get all of the goodness of XML editing for the embedded XSL code as well, without having to temporarily remove the CDATA tags or copy the CDATA content into a temporary file. Example:

        <root>
          <someTag><![CDATA[
            <xsl:template match="/">
              <!-- XSL content here -->
            </xsl:template>
          ]]>
          </someTag>
        </root>

    Note that the name of the tag (in the example, someTag) containing the content could be anything. We also sometimes embed Javascript inside CDATA regions, and again, it would be nice to turn on Javascript syntax highlighting for those regions. Again, the tag the data is embedded in is usually arbitrary and can be anything.

    Read the article

  • Using an embedded Word document to create a new instance of that document.

    - by jim
    For a variety of reasons that are immutable ... I have a Word document which contains a VBA application (the 'app document') which creates a new document based on another document (the 'template') which contains the framework for the new document. I want to embed the 'template' into the 'app document' so that I deliver one file and I know I am using the correct version of the 'template'. I have, so far, embedded the 'template' file into the 'app document' and can find it by looping through "ThisDocument.InlineShapes", looking at .Field.OleFormat.IconLabel to find the 'template' by its name. The inlineShape.Field.OleFormat.Object is the 'template' document itself, and I can .Activate it, which causes it to appear as a regular document. I try to do SaveAs, and it does in fact save the file as the name I give it, however, that saved-as file is not left open, just the embedded file. I can not .Activate the file and just save it, then open the saved file, but that seems more work than necessary. So ... is the way I am doing this "the way", or I have missed some obvious practice? TIA

    Read the article

  • Generating jquery 'rules' from business model to UI in asp.net mvc

    - by jim
    Hi all, I've had a good look around and am certain that there's no matching question on SO, so here goes. Has anyone created a 'helper' method on their model that generates jquery (or plain javascript) validation rules dynamically, based on the criteria/rules that are contained within the object and taken from a repository (i.e. DB)? What I'm thinking of is a discrete set of partial views (and associated models) that have rules at the business logic 'level' and, rather than (or in combination with) validating the rule(s) at postback, translating the same rules into tightly focussed jquery methods that work identically at client (js) and server (c#) levels. I can see benefits here re performance. Also, the rules definitions could be created in a single place (in c#) and the jquery generated off of that, thus allowing single edits to update both code streams. I appreciate that there would be limitations imposed by language-specific constraints, but the general principle could be quite interesting if used appropriately. I'm also aware that testability could be an issue when using two different language structures and hoping to achieve similar test outcomes - but those aside... any thoughts or experiences of similar out there? cheers jimi

    Read the article

  • User submitted content filtering

    - by Jim
    Hey all, Does anyone have any ideas on what could be used as a way to filter untrustworthy user-submitted content? Take Yelp for instance: they would need to prevent competitors from writing reviews of each other's businesses, prevent business owners from favourably reviewing their own business (or forcing friends/family to do so), prevent poor-quality reviews from affecting a business's rating, and so on. I can't think what they might use to do this beyond:

      - Prevent multiple users from the same IP reviewing certain things
      - Prevent business owners reviewing their own business (maybe even other businesses in the same categories as their own?)
      - Somehow determine what a review is about and what the actual intentions behind it are

    Other than the first and second points, I can't think of any clever/easy way to filter potentially harmful reviews from being made available, other than a human doing it. Obviously for a site the size of Yelp this wouldn't be feasible, so what parameters could they take into consideration? Even with human intervention, how would anyone know it was the owner's best buddy writing a fake review without knowing the people? I'm using this as an example in a larger study on the subject of filtering user content automatically. Does anyone have any ideas how these systems may work and what they take into consideration? Thanks!

    Read the article

  • mutableCopyWithZone updating a property value.

    - by Jim
    I have a class that I need to copy, with the ability to make changes to the value of a variable on both classes. Simply put, the classes need to remain clones of each other at all times. My understanding of the documentation is that I can do this using a shallow copy of the class, which has also been declared mutable. By shallow copying, the pointer value for the variable will be cloned so that it is an exact match in both classes. So when I update the variable in the original, the copy will be updated simultaneously. Is this right? As you can see below, I have used mutableCopyWithZone in the class I want to copy. I have tried both the NSCopyObject and allocWithZone methods to get this to work. Although I'm able to copy the class and it appears as intended, when updating the variable it does not change value in the copied class.

        - (id)mutableCopyWithZone:(NSZone *)zone
        {
            //ReviewViewer *copy = NSCopyObject(self, 0, zone);
            ReviewViewer *copy = [[[self class] allocWithZone:zone] init];
            copy->infoTextViews = [infoTextViews copy];
            return copy;
        }

    infoTextViews is a property declared as nonatomic, retain in the header file of the class being copied. I have also implemented the NSMutableCopying protocol accordingly. Any help would be great.

    Read the article

  • Professional Scrum Developer (.NET) Training in London

    - by Martin Hinshelwood
    On the 26th - 30th July, in Microsoft's offices in London, Adam Cogan from SSW will be presenting the first Professional Scrum Developer course in the UK. I will be teaching this course alongside Adam, and it is a fantastic experience. You are split into teams and go head-to-head to deliver units of potentially shippable work in four two-hour sprints. The Professional Scrum Developer course is the only course endorsed by both Microsoft and Ken Schwaber, and they have worked together very effectively in bringing this course to fruition. This course is the brainchild of Richard Hundhausen, a Microsoft Regional Director, and both Adam and I attended the Trainer Prep in Sydney when he was there earlier this year. He is a fantastic trainer, and no matter where you do this course you can be safe in the knowledge that he has trained and vetted all of the teachers. A tools version of Ken, if you will.

    Find a course and register | Download this syllabus | Download the Scrum Guide

    What is the Professional Scrum Developer course all about?
    The Professional Scrum Developer course is a unique and intensive five-day experience for software developers. The course guides teams on how to turn product requirements into potentially shippable increments of software using the Scrum framework, Visual Studio 2010, and modern software engineering practices. Attendees will work in self-organizing, self-managing teams using a common instance of Team Foundation Server 2010.

    Who should attend this course?
    This course is suitable for any member of a software development team – architect, programmer, database developer, tester, etc. Entire teams are encouraged to attend and experience the course together, but individuals are welcome too. Attendees will self-organize to form cross-functional Scrum teams. These teams require an aggregate of skills specific to the selected case study. Please see the last page of this document for specific details. Product Owners, ScrumMasters, and other stakeholders are welcome too, but keep in mind that everyone who attends will be expected to commit to work and pull their weight on a Scrum team.

    What should you know by the end of the course?
    Scrum will be experienced through a combination of lecture, demonstration, discussion, and hands-on exercises. Attendees will learn how to do Scrum correctly while being coached and critiqued by the instructor, in the following topic areas:

      - Form effective teams
      - Explore and understand legacy "Brownfield" architecture
      - Define quality attributes, acceptance criteria, and "done"
      - Create automated builds
      - How to handle software hotfixes
      - Verify that bugs are identified and eliminated
      - Plan releases and sprints
      - Estimate product backlog items
      - Create and manage a sprint backlog
      - Hold an effective sprint review
      - Improve your process by using retrospectives
      - Use emergent architecture to avoid technical debt
      - Use Test Driven Development as a design tool
      - Set up and leverage continuous integration
      - Use Test Impact Analysis to decrease testing times
      - Manage SQL Server development in an Agile way
      - Use .NET and T-SQL refactoring effectively
      - Build, deploy, and test SQL Server databases
      - Create and manage test plans and cases
      - Create, run, record, and play back manual tests
      - Set up a branching strategy and branch code
      - Write more maintainable code
      - Identify and eliminate people and process dysfunctions
      - Inspect and improve your team's software development process

    What does the week look like?
    This course is a mix of lecture, demonstration, group discussion, simulation, and hands-on software development. The bulk of the course will be spent working as a team on a case study application, delivering increments of new functionality in mini-sprints. Here is the week at a glance: Monday morning and most of the day Friday will be spent with the computers powered off, so you can focus on sharpening your game of Scrum and avoiding the common pitfalls when implementing it.

    The Sprints
    Timeboxing is a critical concept in Scrum as well as in this course. We expect each team and student to understand and obey all of the timeboxes. The timebox duration will always be clearly displayed during each activity. Expect the instructor to enforce it. Each of the ½ day sprints will roughly follow this schedule:

        Component                 Description                                                                      Minutes
        Instruction               Presentation and demonstration of new and relevant tools & practices             60
        Sprint planning meeting   Product owner presents backlog; each team commits to delivering functionality    10
        Sprint planning meeting   Each team determines how to build the functionality                              10
        The Sprint                The team self-organizes and self-manages to complete their tasks                 120
        Sprint Review meeting     Each team will present their increment of functionality to the other teams      30
        Sprint Retrospective      A group retrospective meeting will be held to inspect and adapt                  10

    Each team is expected to self-organize and manage their own work during the sprint. Pairing is highly encouraged. The instructor/product owner will be available if there are questions or impediments, but will be hands-off by default. You should be prepared to communicate and work with your team members in order to achieve your sprint goal. If you have development-related questions or get stuck, your partner or team should be your first level of support.

    Module 1: INTRODUCTION
    This module provides a chance for the attendees to get to know the instructors as well as each other. The Professional Scrum Developer program, as well as the day-by-day agenda, will be explained. Finally, the Scrum team will be selected and assembled so that the forming, storming, norming, and performing can begin.
      - Trainer and student introductions
      - Professional Scrum Developer program
      - Agenda
      - Logistics
      - Team formation
      - Retrospective

    Module 2: SCRUMDAMENTALS
    This module provides a level-setting understanding of the Scrum framework, including the roles, timeboxes, and artifacts. The team will then experience Scrum firsthand by simulating a multi-day sprint of product development, including planning, review, and retrospective meetings.
      - Scrum overview
      - Scrum roles
      - Scrum timeboxes (ceremonies)
      - Scrum artifacts
      - Simulation
      - Retrospective
    It's required that you read Ken Schwaber's Scrum Guide in preparation for this module and course.

    Module 3: IMPLEMENTING SCRUM IN VISUAL STUDIO 2010
    This module demonstrates how to implement Scrum in Visual Studio 2010 using a Scrum process template*. The team will learn the mapping between the Scrum concepts and how they are implemented in the tool. After connecting to the shared Team Foundation Server, the team members will then return to the simulation – this time using Visual Studio to manage their product development.
      - Mapping Scrum to Visual Studio 2010
      - User Story work items
      - Task work items
      - Bug work items
      - Demonstration
      - Simulation
      - Retrospective

    Module 4: THE CASE STUDY
    In this module the team is introduced to their problem domain for the week. A kickoff meeting by the Product Owner (the instructor) will set the stage for the why and what that will take place during the upcoming sprints. The team will then define the quality attributes of the project and their definition of "done." The legacy application code will be downloaded, built, and explored, so that any bugs can be discovered and reported.
      - Introduction to the case study
      - Download the source code, build, and explore the application
      - Define the quality attributes for the project
      - Define "done"
      - How to file effective bugs in Visual Studio 2010
      - Retrospective

    Module 5: HOTFIX
    This module drops the team directly into a Brownfield (legacy) experience by forcing them to analyze the existing application's architecture and code in order to locate and fix the Product Owner's high-priority bug(s). The team will learn best practices around finding, testing, fixing, validating, and closing a bug.
      - How to use Architecture Explorer to visualize and explore
      - Create a unit test to validate the existence of a bug
      - Find and fix the bug
      - Validate and close the bug
      - Retrospective

    Module 6: PLANNING
    This short module introduces the team to release and sprint planning within Visual Studio 2010. The team will define and capture their goals as well as other important planning information.
      - Release vs. Sprint planning
      - Release planning and the Product Backlog
      - Product Backlog prioritization
      - Acceptance criteria and tests
      - Sprint planning and the Sprint Backlog
      - Creating and linking Sprint tasks
      - Retrospective
    At this point the team will have the knowledge of Scrum, Visual Studio 2010, and the case study application to begin developing increments of potentially shippable functionality that meet their definition of done.

    Module 7: EMERGENT ARCHITECTURE
    This module introduces the architectural practices and tools a team can use to develop a valid design on which to develop new functionality. The teams will learn how Scrum supports good architecture and design practices. After the discussion, the teams will be presented with the product owner's prioritized backlog so that they may select and commit to the functionality they can deliver in this sprint.
      - Architecture and Scrum
      - Emergent architecture
      - Principles, patterns, and practices
      - Visual Studio 2010 modeling tools
      - UML and layer diagrams
      - SPRINT 1
      - Retrospective

    Module 8: TEST DRIVEN DEVELOPMENT
    This module introduces Test Driven Development as a design tool and how to implement it using Visual Studio 2010. To maximize productivity and quality, a Scrum team should set up Continuous Integration to regularly build every team member's code changes and run regression tests. Refactoring will also be defined and demonstrated in combination with Visual Studio's Test Impact Analysis to efficiently re-run just those tests which were impacted by refactoring.
      - Continuous integration
      - Team Foundation Build
      - Test Driven Development (TDD)
      - Refactoring
      - Test Impact Analysis
      - SPRINT 2
      - Retrospective

    Module 9: AGILE DATABASE DEVELOPMENT
    This module lets the SQL Server database developers in on a little secret – they can be agile too. By using the database projects in Visual Studio 2010, the database developers can join the rest of the team. The students will see how to apply Agile database techniques within Visual Studio to support the SQL Server 2005/2008/2008R2 development lifecycle.
      - Agile database development
      - Visual Studio database projects
      - Importing schema and scripts
      - Building and deploying
      - Generating data
      - Unit testing
      - SPRINT 3
      - Retrospective

    Module 10: SHIP IT
    Teams need to know that just because they like the functionality doesn't mean the Product Owner will. This module revisits acceptance criteria as it pertains to acceptance testing. By refining acceptance criteria into manual test steps, team members can execute the tests, recording the results and reporting bugs in a number of ways. Manual tests will be defined and executed using the Microsoft Test Manager tool. As the Sprint completes and an increment of functionality is delivered, the team will also learn why and when they should create a branch of the codeline.
      - Acceptance criteria
      - Testing in Visual Studio 2010
      - Microsoft Test Manager
      - Writing and running manual tests
      - Branching
      - SPRINT 4
      - Retrospective

    Module 11: OVERCOMING DYSFUNCTION
    This module introduces the many types of people, process, and tool dysfunctions that teams face in the real world. Many dysfunctions and scenarios will be identified, along with ideas and discussion for how a team might mitigate them. This module will enable you and your team to move toward independence and improve your game of Scrum when you depart class.
      - Scrum-butts and flaccid Scrum
      - Best practices working as a team
      - Team challenges
      - ScrumMaster challenges
      - Product Owner challenges
      - Stakeholder challenges
      - Course Retrospective

    What will be expected of you and your team?
    This is a unique course in that it's technically-focused, team-based, and employs timeboxes. It demands that the members of the teams self-organize and self-manage their own work to collaboratively develop increments of software. All attendees must commit to:
      - Pay attention to all lectures and demonstrations
      - Participate in team and group discussions
      - Work collaboratively with other team members
      - Obey the timebox for each activity
      - Commit to work and do your best to deliver
    All teams should have these skills:
      - Understanding of Scrum
      - Familiarity with Visual Studio 2010
      - C#, .NET 4.0 & ASP.NET 4.0 experience*
      - SQL Server 2008 development experience
      - Software testing experience
    * Check with the instructor ahead of time for the exact technologies

    Self-organising teams
    Another unique attribute of this course is that it's a technical training class being delivered to teams of developers, not pairs, and not individuals. Ideally, your actual software development team will attend the training to ensure that all necessary skills are covered. However, if you wish to attend an open enrolment course alone or with just a couple of colleagues, realize that you may be placed on a team with other attendees. The instructor will do his or her best to ensure that each team is cross-functional to tackle the case study, but there are no guarantees. You may be required to try a new role, learn a new skill, or pair with somebody unfamiliar to you. This is just good Scrum!

    Who should NOT take this course?
    Because of the nature of this course, as explained above, certain types of people should probably not attend:
      - Students requiring command and control style instruction – there are no prescriptive/step-by-step (think traditional Microsoft Learning) labs in this course
      - Students who are unwilling to work within a timebox
      - Students who are unwilling to work collaboratively on a team
      - Students who don't have any skill in any of the software development disciplines
      - Students who are unable to commit fully to their team – not only will this diminish the student's learning experience, but it will also impact their team's learning experience

    Find a course and register | Download this syllabus | Download the Scrum Guide

    Technorati Tags: Scrum, SSW, Pro Scrum Dev

    Read the article

  • ADO.NET Data Services Entity Framework request error when property setter is internal

    - by Jim Straatman
    I receive an error message when exposing an ADO.NET Data Service using an Entity Framework data model that contains an entity (called "Case") with an internal setter on a property. If I modify the setter to be public (using the entity designer), the data service works fine. I don't need the entity "Case" exposed in the data service, so I tried to limit which entities are exposed using SetEntitySetAccessRule. This didn't work, and the service endpoint fails with the same error.

        public static void InitializeService(IDataServiceConfiguration config)
        {
            config.SetEntitySetAccessRule("User", EntitySetRights.AllRead);
        }

    The error message is reported in a browser when the .svc endpoint is called. It is very generic, and reads "Request Error. The server encountered an error processing the request. See server logs for more details." Unfortunately, there are no entries in the System and Application event logs. I found this stackoverflow question that shows how to configure tracing on the service. After doing so, the following NullReferenceException was reported in the trace log. Does anyone know how to avoid this exception when including an entity with an internal setter?

        Handling an exception. (System.ServiceModel.Diagnostics.TraceHandledException)
        System.NullReferenceException, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089
        Object reference not set to an instance of an object.
           at System.Data.Services.Providers.ObjectContextServiceProvider.PopulateMemberMetadata(ResourceType resourceType, MetadataWorkspace workspace, IDictionary`2 entitySets, IDictionary`2 knownTypes)
           at System.Data.Services.Providers.ObjectContextServiceProvider.PopulateMetadata(IDictionary`2 knownTypes, IDictionary`2 entitySets)
           at System.Data.Services.Providers.BaseServiceProvider.PopulateMetadata()
           at System.Data.Services.DataService`1.CreateProvider(Type dataServiceType, Object dataSourceInstance, DataServiceConfiguration& configuration)
           at System.Data.Services.DataService`1.EnsureProviderAndConfigForRequest()
           at System.Data.Services.DataService`1.ProcessRequestForMessage(Stream messageBody)
           at SyncInvokeProcessRequestForMessage(Object , Object[] , Object[] )
           at System.ServiceModel.Dispatcher.SyncMethodInvoker.Invoke(Object instance, Object[] inputs, Object[]& outputs)
           at System.ServiceModel.Dispatcher.DispatchOperationRuntime.InvokeBegin(MessageRpc& rpc)
           at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage5(MessageRpc& rpc)
           at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage4(MessageRpc& rpc)
           at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage3(MessageRpc& rpc)
           at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage2(MessageRpc& rpc)
           at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage1(MessageRpc& rpc)
           at System.ServiceModel.Dispatcher.MessageRpc.Process(Boolean isOperationContextSet)

    Read the article

  • How to avoid "source !=null" when using Code Contracts and Linq To Sql?

    - by Florian
    I have the following code using a normal data context, which works great:

        var dc = new myDataContext();
        Contract.Assume(dc.Cars != null);
        var cars = (from c in dc.Cars
                    where c.Owner == "Jim"
                    select c).ToList();

    However, when I convert the filter to an extension method like this:

        var dc = new myDataContext();
        Contract.Assume(dc.Cars != null);
        var cars = dc.Cars.WithOwner("Jim");

        public static IQueryable<Car> WithOwner(this IQueryable<Car> cars, string owner)
        {
            Contract.Requires(cars != null);
            return cars.Where(c => c.Owner == owner);
        }

    I get the following warning:

        warning : CodeContracts: requires unproven: source != null

    Read the article

  • Why does PostgreSQL query performance drop over time, but get restored when the index is rebuilt?

    - by Jim Rush
    According to this page in the manual, indexes don't need to be maintained. However, we are running with a PostgreSQL table that has a continuous rate of updates, deletes and inserts, and over time (a few days) we see a significant query degradation. If we delete and recreate the index, query performance is restored. We are using out-of-the-box settings.

      - The table in our test currently starts out empty and grows to half a million rows.
      - It has a fairly large row (lots of text fields).
      - Our search is based on an index, not the primary key (I've confirmed the index is being used, at least under normal conditions).
      - The table is being used as a persistent store for a single process.
      - We are using PostgreSQL on Windows with a Java client.

    I'm willing to give up insert and update performance to keep up the query performance. We are considering rearchitecting the application so that data is spread across various dynamic tables in a manner that allows us to drop and rebuild indexes periodically without impacting the application. However, as always, there is a time crunch to get this to work and I suspect we are missing something basic in our configuration or usage. We have considered forcing vacuuming and rebuild to run at certain times, but I suspect the locking period for such an action would cause our query to block. This may be an option, but there are some real-time (windows of 3-5 seconds) implications that require other changes in our code.

    Additional information: table and index

        CREATE TABLE icl_contacts
        (
            id bigint NOT NULL,
            campaignfqname character varying(255) NOT NULL,
            currentstate character(16) NOT NULL,
            xmlscheduledtime character(23) NOT NULL,
            ... 25 or so other fields. Most of them fixed or varying character fiel ...
            CONSTRAINT icl_contacts_pkey PRIMARY KEY (id)
        )
        WITH (OIDS=FALSE);
        ALTER TABLE icl_contacts OWNER TO postgres;

        CREATE INDEX icl_contacts_idx
            ON icl_contacts
            USING btree (xmlscheduledtime, currentstate, campaignfqname);

    Analyze:

        Limit (cost=0.00..3792.10 rows=750 width=32) (actual time=48.922..59.601 rows=750 loops=1)
          -> Index Scan using icl_contacts_idx on icl_contacts (cost=0.00..934580.47 rows=184841 width=32) (actual time=48.909..55.961 rows=750 loops=1)
                Index Cond: ((xmlscheduledtime < '2010-05-20T13:00:00.000'::bpchar) AND (currentstate = 'SCHEDULED'::bpchar) AND ((campaignfqname)::text = '.main.ee45692a-6113-43cb-9257-7b6bf65f0c3e'::text))

    And, yes, I am aware there are a variety of things we could do to normalize and improve the design of this table. Some of these options may be available to us. My focus in this question is about understanding how PostgreSQL is managing the index and query over time (understand why, not just fix). If it were to be done over or significantly refactored, there would be a lot of changes.

    Read the article

  • What's the best way to count unique visitors with Hadoop?

    - by beagleguy
    hey all, just getting started on hadoop and curious what the best way in mapreduce would be to count unique visitors if your logfiles looked like this...

        DATE        siteID   action     username
        05-05-2010  siteA    pageview   jim
        05-05-2010  siteB    pageview   tom
        05-05-2010  siteA    pageview   jim
        05-05-2010  siteB    pageview   bob
        05-05-2010  siteA    pageview   mike

    and for each site you wanted to find out the unique visitors? I was thinking the mapper would emit siteID \t username, and the reducer would keep a set() of the unique usernames per key and then emit the length of that set. However, that would potentially mean storing millions of usernames in memory, which doesn't seem right. Anyone have a better way? I'm using python streaming by the way. thanks

    Read the article

  • Update multiple rows with one query?

    - by kavoir.com
    I found something that works for updating one field here: http://www.karlrixon.co.uk/articles/sql/update-multiple-rows-with-different-values-and-a-single-sql-query/

        UPDATE person
        SET name = CASE id
            WHEN 1 THEN 'Jim'
            WHEN 2 THEN 'Mike'
            WHEN 3 THEN 'Precious'
        END
        WHERE id IN (1,2,3)

    My question is how to update more than one field? Such as:

        UPDATE person
        SET name = CASE, sex = CASE id
            WHEN 1 THEN 'Jim', 'female'
            WHEN 2 THEN 'Mike' 'male'
            WHEN 3 THEN 'Precious', 'male'
        END
        WHERE id IN (1,2,3)

    Which doesn't work, of course. Tried a few other combinations and failed. Any idea? Thanks!

    Read the article

  • Excel cell from one sheet to another sheet

    - by Eric
    Excel 07. I have two excel spreadsheets. Sheet1 has a few cells on it that I would like to be populated with a few cells from sheet2. I then want sheet1 to be replicated about 400 times, once for each employee in the agency. Here is the example.

        Sheet 1
        Person name   number1   number2   number3
        cell          cell      cell      cell

        Sheet 2 (information sheet)
        Person name   number1   number2   number3
        Jim           23        32        54
        Sally         25        22        53

    End result:

        Sheet 3
        Person name   number1   number2   number3
        Jim           23        32        54

        Sheet 4
        Person name   number1   number2   number3
        sally         25        22        53

    Any help will be appreciated. Thank you.

    Read the article

  • Jquery to find a name on html page and add hyperlink

    - by mikejones12
    Here is my example: I have a website that contains the following:

        <body>
        Jim Nebraska zipcode 65437
        Tony lives in California his zipcode is 98708
        </body>

    I would like to be able to search for zip codes on the page and wrap them with hyperlinks like:

        <body>
        Jim Nebraska zipcode <a href="/65437.htm">65437</a>
        Tony lives in California his zipcode is <a href="/98708.htm">98708</a>
        </body>

    Could I use a regex selector to find the string and then wrap the string, or replace it with the new hyperlink? I am new to jQuery and looking for someone to point me in the right direction. Thank you, Mike

    Read the article

  • Checking for Value in JS Object

    - by pagewil
    I have a JS object like so:

        var ob = {
            1 : { id:101, name:'john', email:'[email protected]' },
            2 : { id:102, name:'jim',  email:'[email protected]' },
            3 : { id:103, name:'bob',  email:'[email protected]' }
        };

    I want to check if this JS object already contains a record. If it doesn't, then I want to add it. For example:

        if(ob contains an id of 101){
            //don't add john
        }else{
            //add john
        }

    Catch my drift? Pretty simple question. I just want to know the best way of doing it. Thanks Guys! W.

    Read the article

  • sendmail and MX records when mail server is not on web host

    - by Jim Nelson
    This is a problem I'm sure is easy to fix, but I've been banging my head on it all day. I'm developing a new web site for a client. The web site resides at (this is an example) website.com. I have a PHP form script to email visitors' requests to [email protected]. When I coded this on a staging server on a different domain, all worked fine. When I moved it to website.com, the mail messages never arrived. The web server is on a virtual host with a major ISP. Here's what I've learned since then: My client's mail server is Microsoft Exchange on a box physically in their office. Whenever someone on the outside world emails [email protected], the mail arrives. But if the web server sends to the same email address, it fails every time. This is not a PHP problem. I secure shell in to the web server and have tested this both with sendmail and the UNIX mail application. I've also tested it by emailing various email accounts from the shell. I can email myself, for example, just nobody at the website.com domain. In short, when I'm logged in to website.com, mail to [email protected], [email protected], [email protected] all fail. All other addresses work fine. What I've discovered is those dropped emails are routed to the web server's "catchall" account where they sit in its inbox. I've done an MX lookup on website.com. The MX record points to mailsec.website.com. I can telnet to mailsec.website.com port 25 and see the SMTP server. It appears to me that website.com isn't doing an MX lookup when it's sending mail to [email protected]. My theory is that it recognizes the domain as local, sees that there's no "requests" user account to deliver it to, and drops the mail into the catchall account. What I want is to force sendmail to do the MX lookup and send the message on to the Exchange server. I'm at wit's end here. I can't figure out how to do this. For that matter, I may be way off base here and have misdiagnosed this entirely. Internet mail and MX has always seemed a black art to me, and my ignorance is certainly showing in this question.

    Read the article

  • max count with joins

    - by trixet
    I have 3 tables:

        users:
        Id  Login
        1   John
        2   Bill
        3   Jim

        computers:
        Id  Name
        1   Computer1
        2   Computer2
        3   Computer3
        4   Computer4
        5   Computer5

        sessions:
        UserId  ComputerId  Minutes
        1       2           47
        2       1           32
        1       4           15
        2       5           5
        1       2           7
        1       1           40
        2       5           31

    I would like to display this resulting table:

        Login  Total_sess  Total_min  Most_freq_computer  Sess_on_most_freq  Min_on_most_freq
        John   4           109        Computer2           2                  54
        Bill   3           68         Computer5           2                  36
        Jim    -           -          -                   -                  -

    Myself, I can only cover the first 3 columns with:

        SELECT Login, COUNT(sessions.UserId), SUM(Minutes)
        FROM users
        LEFT JOIN sessions ON users.Id = sessions.UserId
        GROUP BY users.Id

    And some kind of other columns with:

        SELECT main.*
        FROM (SELECT UserId, ComputerId, COUNT(*) AS cnt, SUM(Minutes)
              FROM sessions
              GROUP BY UserId, ComputerId) AS main
        INNER JOIN (
            SELECT ComputerId, MAX(cnt) AS maxCnt
            FROM (
                SELECT ComputerId, UserId, COUNT(*) AS cnt
                FROM sessions
                GROUP BY ComputerId, UserId
            ) AS Counts
            GROUP BY ComputerId) AS maxes
            ON main.ComputerId = maxes.ComputerId AND main.cnt = maxes.maxCnt

    But I need to get the whole resulting table in one query. I feel I'm doing something completely wrong. Need help.

    Read the article

  • How can I use IndexOf to pick a specific Character when there are more than one of them?

    - by JimDel
    How can I use IndexOf with Substring to pick a specific character when there are more than one of them? Here's my issue. I want to take the path "C:\Users\Jim\AppData\Local\Temp\" and remove the "Temp\" part, leaving just "C:\Users\Jim\AppData\Local\". I have solved my problem with the code below, but this assumes that the "Temp" folder is actually called "Temp". Is there a better way? Thanks

        if (Path.GetTempPath() != null) // Is it there?
        {
            tempDir = Path.GetTempPath(); // Make a string out of it.
            int iLastPos = tempDir.LastIndexOf(@"\");
            if (Directory.Exists(tempDir) && iLastPos > tempDir.IndexOf(@"\"))
            {
                // Take the position of the last "\" and subtract 4.
                // 4 is the length of the word "Temp".
                tempDir = tempDir.Substring(0, iLastPos - 4);
            }
        }
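    Not from the thread, but a small sketch of a direction that avoids assuming the folder is literally named "Temp": trim the trailing separator and let Path.GetDirectoryName walk up one level.

        using System;
        using System.IO;

        class TempParentExample
        {
            static void Main()
            {
                // e.g. "C:\Users\Jim\AppData\Local\Temp\"
                string tempDir = Path.GetTempPath();

                // Drop the trailing "\" so GetDirectoryName sees the Temp folder
                // itself, then take its parent: "C:\Users\Jim\AppData\Local"
                string parent = Path.GetDirectoryName(
                    tempDir.TrimEnd(Path.DirectorySeparatorChar));

                // Re-append the separator if the trailing "\" matters.
                Console.WriteLine(parent + Path.DirectorySeparatorChar);
            }
        }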

    Read the article

  • Determining if Memory Pointer is Valid - C++

    - by Jim Fell
    It has been my observation that if free( ptr ) is called where ptr is not a valid pointer to system-allocated memory, an access violation occurs. Let's say that I call free like this:

        LPVOID ptr = (LPVOID)0x12345678;
        free( ptr );

    This will most definitely cause an access violation. Is there a way to test that the memory location pointed to by ptr is valid system-allocated memory? It seems to me that the memory management part of the Windows OS kernel must know what memory has been allocated and what memory remains for allocation. Otherwise, how could it know if enough memory remains to satisfy a given request? (rhetorical) That said, it seems reasonable to conclude that there must be a function (or set of functions) that would allow a user to determine if a pointer is valid system-allocated memory. Perhaps Microsoft has not made these functions public. If Microsoft has not provided such an API, I can only presume that it was for an intentional and specific reason. Would providing such a hook into the system pose a significant threat to system security?

    Situation report: Although knowing whether a memory pointer is valid could be useful in many scenarios, this is my particular situation. I am writing a driver for a new piece of hardware that is to replace an existing piece of hardware that connects to the PC via USB. My mandate is to write the new driver such that calls to the existing API for the current driver will continue to work in the PC applications in which it is used. Thus the only required change to existing applications is to load the appropriate driver DLL(s) at startup. The problem here is that the existing driver uses a callback to send received serial messages to the application; a pointer to allocated memory containing the message is passed from the driver to the application via the callback. It is then the responsibility of the application to call another driver API to free the memory by passing back the same pointer from the application to the driver. In this scenario the second API has no way to determine if the application has actually passed back a pointer to valid memory.

    Read the article

  • Possible to have empty values with Html For Helpers such as Html.TextBoxFor()?

    - by chobo2
    Hi, is it possible to make an Html "For" helper in ASP.NET MVC 2.0 render with an empty value? For example, if I do this:

        Html.TextBoxFor( u => u.ProductName )

    it renders as:

        <input id="ProductName" name="ProductName" type="text" value="Jim" />

    Now I don't want the textbox to display "Jim". I want it to display nothing:

        <input id="ProductName" name="ProductName" type="text" value="" />

    I tried this:

        Html.TextBoxFor( u => u.ProductName, new { @value = " "})

    but that seems to do nothing. So how can I do this? I hope it is possible, otherwise I think these new helpers have a great flaw, since then I would need JavaScript to clear them, and I hardly ever want a default value in the textbox, especially when I have a label right beside the textbox saying what it is.
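    For what it's worth, here is a small sketch of the two workarounds usually suggested for this (assumptions on my part, not a confirmed fix for the MVC 2 helper): either fall back to the non-lambda overload, which takes an explicit value, or hand the view a model whose property is already empty so the strongly typed helper has nothing to echo back. The Product model and controller below are illustrative only.

        using System.Web.Mvc;

        // Illustrative model standing in for whatever u.ProductName lives on.
        public class Product
        {
            public string ProductName { get; set; }
        }

        public class ProductsController : Controller
        {
            // Sketch: with ProductName cleared server-side,
            // Html.TextBoxFor(u => u.ProductName) renders value="".
            public ActionResult Create()
            {
                return View(new Product { ProductName = string.Empty });
            }
        }

    In the view itself (MVC 2 WebForms syntax), the explicit-value overload can also be used directly: <%= Html.TextBox("ProductName", string.Empty) %>.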

    Read the article

  • How to sync two computers using new MobileMe calendar

    - by CesarGon
    I have been using MobileMe for over a year with success. I use it to sync my Outlook calendars in my work and home computers, using Windows 7 and Outlook 2007. The main Outlook calendar folder in my work computer is replicated to MobileMe as "Work", and synced to my home computer, and the main calendar folder in my home computer is replicated to MobileMe as "Home", and synced to my work computer. This means that I can see both "Work" and "Home" calendars from both computers (as well as from the web interface through me.com), which is very convenient. Yesterday I migrated to the new MobileMe calendar, accepting the suggestion that popped up on the me.com website. After the migration, the MobileMe control panel on each of Windows computers asked me to re-configure my calendar setup, and everything fell apart. The "Home" and "Work" calendar folders in Outlook are now ignored by MobileMe, and new ones named "Home in MobileMe" and "Work in MobileMe" have been created, and placed in a separate Outlook data file rather than the default. This means that now: I now have four folders, two of which are not replicated to MobileMe The two folders that are not replicated reside on a separate data file, so alarms and reminders don't work; they're basically useless to me as calendar folders In addition, the button in the MobileMe Control Panel that used to let me specify what MobileMe folder should be synced against the default Outlook folder has gone. MobileMe is now too smart. Do you have any idea how to undo this mess and go back to a situation where I have two folders, as described in the top paragraph, which keep synced? I don't want an extra data file. Thanks.

    Read the article

  • Games, Windows 8.1 and 144Hz display

    - by Marioysikax
    So I have been having problems with a few games after switching from 7 to 8.1, which seem to be related to my 144Hz monitor. A few examples: Shank, Shank 2, Blood of the Werewolf, Astebreed, The Sims 2 and Rayman Legends patch 1.2. There have been a few others as well, but it has been a while and I have a 600+ game Steam library. Of those games, at least The Sims 2 and Shank worked without any problems with the same setup and Windows 7. So basically these games simply refuse to launch with my basic setup. However, if I plug in a 60Hz TV over HDMI instead, everything magically starts to work. For Astebreed and The Sims 2, using windowed mode also works. For Rayman, the demo and version 1.0 work for some reason, and patch 1.2 breaks the settings menu. I have already tried contacting the games' support. EA support stated the game simply shouldn't work with 8.1 at all (which is not true, as it works with that TV and a friend on 8 plays it just fine), Ubisoft support took a few weeks and said they would forward the info for further processing, and Blood of the Werewolf support had no idea what was going on and told me to just use my TV instead. Changing the monitor's refresh rate to 120Hz or forcing it to 60Hz doesn't do anything. I have DVI right now, but I will try DisplayPort when I get the cable. At PCGW, Garrett said it may have something to do with how resolution listing works in 8 compared to earlier Windows versions, but my googling skills don't bring anything up, and compatibility mode for earlier Windows versions doesn't work either (not that I expected it to). My system specs are on my Steam profile. How do I get these games to work with my 144Hz monitor, as well as possible future games with the same problem? Downgrading to 7 would work but is far from practical, and I don't own a legit license for that one.

    Read the article
