Search Results

Search found 37426 results on 1498 pages for 'simple talk editorial team'.

  • Enablement 2.0 Get Specialized!

    - by mseika
    Enablement 2.0 Get Specialized! The Oracle PartnerNetwork Specialized program is releasing new certifications on our latest products, and partners are invited to be the first candidates.
    Oracle Fusion Customer Relationship Management 11g Sales Essentials Exam (1Z0-456) – now in Production! All Beta exam participants will receive their exam scores as of September 24, 2012. The successful candidates will receive their certificates starting mid-October 2012.
    Oracle Fusion Human Capital Management 11g Human Resources Essentials Exam (1Z0-584) – now in Production! All Beta exam participants will receive their exam scores as of September 27, 2012. The successful candidates will receive their certificates starting mid-October 2012.
    Oracle Fusion Human Capital Management 11g Talent Management Essentials Exam (1Z0-585) – now in Production! All Beta exam participants will receive their exam scores as of September 27, 2012. The successful candidates will receive their certificates starting mid-October 2012.
    Contact Us: Please direct any inquiries you may have to the Oracle Partner Enablement team at [email protected].

    Read the article

  • Getting Requirements Right

    - by Tim Murphy
    Originally posted on: http://geekswithblogs.net/tmurphy/archive/2013/10/28/getting-requirements-right.aspx
    I had a meeting with a stakeholder who stated, “I bet you wish I wasn’t in these meetings.” She said this because she kept changing what we thought the end product should look like. My reply was that it would be much worse if she came in at the end of the project and told us we had just built the wrong solution.
    You have to take the time to get the requirements right. Be honest with all involved parties about the amount of time it is taking to refine the requirements. The only thing worse than wrong requirements is a surprise budget overrun. If you give open visibility into your progress, then management has the ability to shift priorities if needed.
    In order to capture the best requirements, use different approaches to help your stakeholders articulate their needs. Use mock-ups and matrix spreadsheets to let them visualize and confirm that everyone has the same understanding. The goal isn’t to record every last detail, but to have the major landmarks identified so there are fewer surprises along the way.
    Help the team members understand that you all have the same goal: to create the best possible solution for the given business problem. If you do this, everyone involved will do their best to outline a picture of what is to be built, and you will be able to design an appropriate solution to fill those needs more easily.
    Technorati Tags: requirements gathering,PSC Group,PSC

    Read the article

  • CI - How long is continuous?

    - by Andy
    We currently are using CCNet as our continuous integration server. Most projects check for changes every 30 seconds (the default) and, if needed, perform a build (unit tests, StyleCop, FxCop, etc.). We've gotten quite a few projects now, and the server spends most of its time near 100% CPU utilization. This has alarmed some of the development team, even though the server is responsive and builds still take about as long as they always have. It's been suggested that we increase the check interval to about five minutes. To me that seems too long: we risk someone committing code and then going home for the weekend, leaving a broken build that could hold up others. In response, the suggestion is that if someone needs to know the results, they can force a build. But that seems to defeat the purpose of CI, as I thought it was supposed to be automated. My proposed solution is just to get another build server and split the builds amongst the servers. Am I thinking about this the wrong way, or is there a point where, if integration isn't happening often enough, you're not really doing CI anymore?

    Read the article

  • Vote for bugs which impact you!

    - by Sveta Smirnova
    Matt Lord already announced this change, but I am so happy that I want to repeat it. The MySQL Community Bugs Database Team has introduced a new "Affects Me" button. After you click this button, a counter assigned to each bug report will increase by one. This means that we, MySQL Support and Engineering, will see how many users are affected by the bug. Why is this important? We have always considered community input as we prioritize bug fixes, and this is one more point of reference for us. Before this change we only had a counter for support customers, which increased when they opened a support request complaining that they were affected by a bug. But our customers are smart and do not always open a support request when they hit a bug: sometimes they simply implement a workaround, or there could be other circumstances in which they don't create a ticket, or the bug could be in a just-released version that big shops are afraid to use in production. Therefore, when discussing which bugs to prioritize, we sometimes cannot rely only on the "affects paying customers" number; rather, we need to guess whether a bug can affect a large group of our users. We used the number of bug report subscribers, the most recent comments, and forum searches, but all these methods gave only an approximation. Therefore I want to ask you: if you hit a bug which was already reported but not fixed yet, please click the "Affects Me" button! It will take just a few seconds, but your voice will be heard.

    Read the article

  • What are the signs that a ten-day debugging session will not resolve an issue? [on hold]

    - by smonff
    Ten days ago, we fixed a bug on a large application, and the hot fix had the side effect of making some data disappear from the user's point of view. The data was not deleted, but was set to a hidden status. It should be possible to get the data back, but that is proving hard: we've already spent 10 days understanding and reproducing the problem (mostly with SQL queries, but sometimes it is necessary to update the database to test the application logic). My questions are:
    - Is 10 days a normal amount of time for this kind of problem?
    - Should we keep on and retrieve the data, or should we give up this work (so the customer-relationship person will tell these users sorry for the loss, but your data has disappeared, or maybe tell them nothing at all)?
    - What are the signs that show we should stop searching for a way to solve this issue?
    Edit, about the context: we are a small team (3 people), the users are not the customers, and the lost data is not about the users' money, banking, or other vital data. This is a question from a confused developer about development methodologies and business concerns, not about how we should deal with customers.

    Read the article

  • Release Notes for 5/18/2012

    Here are the notes for this week’s release:

    Pull Requests
    - We’ve added the ability to see the snippets of code where a user commented inline in the discussion of pull requests. You can also add another line comment directly from the discussion area, rather than navigating to the code diff viewer. Note that there’s currently a known issue where the line associated with the comment isn’t being properly differentiated for existing pull requests (the line in the middle of each diff preview should be bolded). Apologies for the inconvenience!
    - As part of this work, we also took some time to clean up our diff viewer UI to remove the dots and introduce a new color scheme where green is used for added lines.

    Bug Fixes
    - Fixed an issue affecting the ability to assign pull requests.
    - Fixed an issue where managing various team resources for a project was not working in Chrome or Firefox.
    - Fixed an issue where a project’s RSS subscribe dialog popped up in the wrong place.
    - Fixed an issue where editing wiki anchor links would insert extra characters, resulting in broken links.
    - Fixed an issue where project logos did not display correctly when browsing the site with https in Chrome or Firefox.
    - Fixed an issue where users could encounter errors when deleting remote Git branches.
    - Fixed an issue affecting the ability of fork collaborators to push changes to the fork.
    - Fixed an issue where the advanced work item filters would not persist when navigating through result pages.
    - Fixed an issue where the issue tracker notifications link was not clickable in Chrome.
    - Fixed an issue where pull request comments with line breaks would not be formatted properly when viewing the pull request.

    Other
    - We upgraded our Git servers to version 1.7.10.1.

    Have ideas on how to improve CodePlex? Visit our ideas page! Vote for your favorite ideas or submit a new one. Got Twitter? Follow us and keep apprised of the latest releases and service status at @codeplex.

    Read the article

  • SQL Server 2012 Express LocalDB – How to get started

    - by krislankford
    As many of you are aware, SQL Server can be a bit of a pig when it comes to system resources on your development machine. As part of the 2012 products, Microsoft has added SQL Server 2012 Express LocalDB, which is a happy medium for me compared with installing a full-blown SQL Server on my box. This does not work in all cases for all development, but if you are doing web or local client development then it should suffice. On the other hand, if you are working with technologies like SharePoint, or trying to run Team Foundation Server on your local box, then you will be out of luck while using LocalDB. To start off with, the LocalDB setup is delivered and packaged with Visual Studio 2012 RC. If you want the stand-alone installer you can download it here in either the 32 or 64 bit flavors. Once you get it installed you can start using it right away in either Visual Studio 2010 or the new Visual Studio 2012 RC. To get started, open the SQL Server Object Explorer in Visual Studio by clicking the menu option View –> SQL Server Object Explorer. This brings up the navigation pane where you can add a SQL Server. Once you add the SQL Server you will be prompted with the “Connect to Server” dialog, where you can enter “(localdb)\v11.0” as the server name. Click connect and you should be connected to your LocalDB, where you can create and manage databases from Visual Studio 2010, Visual Studio 2012 or SSMS. Once you have started creating databases here, you can use the database projects in Visual Studio with these databases, as well as use the (localdb)\v11.0 server name inside your connection string for your development environment. Hope this helps someone get started with SQL Server 2012 Express LocalDB! It provides a great balance for developing against SQL Server 2012.
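    To sanity-check the setup, here is a minimal sketch (not from the original post) of hitting that instance from C# code. It assumes the default (localdb)\v11.0 automatic instance described above and a hypothetical database named MyDevDb:

    using System;
    using System.Data.SqlClient;

    class LocalDbSmokeTest
    {
        static void Main()
        {
            // "(localdb)\v11.0" is the default SQL Server 2012 Express LocalDB instance name;
            // "MyDevDb" is a hypothetical database created earlier from Visual Studio or SSMS.
            var connectionString = @"Server=(localdb)\v11.0;Database=MyDevDb;Integrated Security=true;";

            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("SELECT @@VERSION;", conn))
            {
                conn.Open();
                // Prints the SQL Server version string if the connection succeeds.
                Console.WriteLine(cmd.ExecuteScalar());
            }
        }
    }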

    Read the article

  • Is it possible/likely to be paid fairly without a college degree? [closed]

    - by user20134
    Some back story, and then my question: I took a "break" from getting a university education last year to work full time as a back-end developer on a GIS application at $10.50 an hour. Later that year I was hired on by a fairly prestigious organization to work on their GIS application for a meager salary + rockin' benefits (not that I need them). I agreed to work on this project through Summer 2012. I don't feel like I'm being fairly compensated for my time. Other team members make between 3-5 times as much as I do, and their work isn't 3-5 times as good as mine, nor do they produce 3-5 times as much output. I don't think this is a rectifiable situation within this institution: they've got a set of personnel charts, and the way pay gets computed, I make less money than any of the janitors (who are very good, very nice people to boot, and I'm glad they get paid so well; I wish everyone got livable wages). I'm pretty bright, but school's a drag. I don't want mega bucks, I just want $40k/yr (localized to the southeast United States) so I can save enough money to travel, or maybe "finish [my] education". My question is this: Are people without degrees ever compensated commensurately with people who have degrees? As someone who never "finished their education", how badly do you think this has hurt you? How do you navigate the job seeking and hiring process? As someone who hires programmers, do you pay more for diplomas? Is that an institutional necessity, or based on your own value judgement?

    Read the article

  • Secondment promotion promises

    - by user75460
    I'm a Java developer at a large FTSE 30 company. My line manager approached me and asked if I'd like to be the team's lead developer, and I was keen to accept. Initially he said I'd be acting up for 3 months, then changed his tune and said I would be doing a 6 month secondment. During this time, he has got himself promoted and I have a new line manager. I have been very successful during this secondment and reviews have been overwhelmingly positive, both from my former line manager and my current one. However, six months on, no lead role has been created in the organization, and a new director has re-organised the structure of the team: two senior roles (senior Android and senior iOS) are going to be created. I feel a bit put out that my secondment has amounted to nothing. I could have done nothing and then applied for the senior role 6 months later (which I feel isn't as marketable as lead developer). During my secondment I have basically become TA, senior developer, line manager and general go-to guy for all things (across Android and iOS). What do you think I should do, and has my company abused its position? I feel they have offered a secondment to a role that they never really planned to create. During this time I have received no financial benefit for doing a more senior role.

    Read the article

  • Partner Showcase

    - by rituchhibber
    Building a High Performance Employee Self Service Portal with Oracle WebCenter
    Free Half Day Technical Workshop
    Organisations started with static corporate intranets at the beginning of the “Noughties”; these have been evolving into the intranet portal that is common today. The rise in Employee Self Service leverages this evolution to transform the intranet into a resource that delivers the “Contextual worker's control panel”. This empowers employees to do their complete job from a single environment covering transactions, document handling, form completion, watching presentations, participating in discussions, through to utilising search functionality. Ether Solutions - the Enterprise Portal specialists - together with C2B2 - the independent middleware experts - will deliver this workshop to you, allowing you to discover how Oracle WebCenter provides a high performance, highly scalable platform for social intranets and Employee Self Service Portals. To register, please click here.
    When? Wednesday, 12th of December 2012
    Where? Institute of Directors, 116 Pall Mall, London SW1Y 5ED
    Who should attend? Lead Developers, Technical Architects, Solution Architects, Technical Leads and other technical team members interested in learning about WebCenter.

    Lingotek - Collaborative Translation Technology
    Lingotek is the leading provider of Collaborative Translation Technology, designed to meet the requirements of organizations challenged with communicating with, interacting with, and commercializing for a global audience. Lingotek software helps companies achieve unprecedented control over the translation process and enables companies to capture, grow, and reuse their linguistic assets. Lingotek has deployed systems for some of the most innovative organizations in the United States and has enabled the success of large Fortune 500 corporations, small professional firms, and companies of every size in between. For further information, please click here.

    Read the article

  • Release Notes for 6/21/2012

    Here are the notes for this week’s release on CodePlex:

    Pull Requests
    - We now support the ability to conduct pull requests in Git and Mercurial across arbitrary branches in your project. No forks necessary! If you’re on a small team of contributors, this is a great way to conduct code reviews for changes to your project.
    - We now support e-mail notifications to be delivered whenever a comment is added to a pull request or line of code pertaining to a pull request. A checkbox for subscribing now appears at the bottom of all pull requests. You can manage your subscriptions by editing your profile.

    Bug Fixes
    - Updated the various change subscription details page to reflect our newer UI theme.
    - Changed the placement of horizontal scrollbar when viewing diffs of pull requests and commits to be inline with the code.
    - Fixed various issues around interacting with the new diff viewer that we introduced last week. Do let us know if you have any feedback on the new diff viewer.

    Have ideas on how to improve CodePlex? Visit our ideas page! Vote for your favorite ideas or submit a new one. Got Twitter? Follow us and keep apprised of the latest releases and service status at @codeplex.

    Read the article

  • Working in Germany (Munich) [closed]

    - by adri
    My husband and I are relocating to Munich in August (he's German). I have been looking for jobs online since I heard that software developers are in great demand, and by the looks of it, that seems to be correct. There are lots of offers online, but I have a problem: my German is not spectacular. I would rate it as basic: good enough to be a tourist, but not good enough to write anything formal, not even my cover letter. So I was wondering how hard it is really going to be for me to find a job with virtually no German (I'm studying, but good German is not going to happen overnight). Also, would it be possible to secure a job before arriving in Germany? I have nearly 10 years of working experience developing software, mostly in .NET (C#, VB, WinForms, ASP.NET), but I also know Java, plus 3 years of experience as a team leader for groups of up to 8 people. In the last couple of years I've been working mostly on digitization of documents (such as birth certificates and the like), but I'm more than willing to try other fields. I also speak English, Spanish, Italian, a bit of Portuguese and, of course, a bit of German.

    Read the article

  • Android AsyncTask testing problem with Android Test Framework

    - by Vlad
    I have a very simple AsyncTask implementation example and am having problems testing it using the Android JUnit framework. It works just fine when I instantiate and execute it in a normal application. However, when it's executed from any of the Android testing framework classes (i.e. AndroidTestCase, ActivityUnitTestCase, ActivityInstrumentationTestCase2, etc.) it behaves strangely:
    - it executes the doInBackground() method correctly;
    - however, it doesn't invoke any of its notification methods (onPostExecute(), onProgressUpdate(), etc.); it just silently ignores them without showing any errors.

    This is the very simple AsyncTask example:

    package kroz.andcookbook.threads.asynctask;

    import android.os.AsyncTask;
    import android.util.Log;
    import android.widget.ProgressBar;
    import android.widget.Toast;

    public class AsyncTaskDemo extends AsyncTask<Integer, Integer, String> {

        AsyncTaskDemoActivity _parentActivity;
        int _counter;
        int _maxCount;

        public AsyncTaskDemo(AsyncTaskDemoActivity asyncTaskDemoActivity) {
            _parentActivity = asyncTaskDemoActivity;
        }

        @Override
        protected void onPreExecute() {
            super.onPreExecute();
            _parentActivity._progressBar.setVisibility(ProgressBar.VISIBLE);
            _parentActivity._progressBar.invalidate();
        }

        @Override
        protected String doInBackground(Integer... params) {
            _maxCount = params[0];
            for (_counter = 0; _counter <= _maxCount; _counter++) {
                try {
                    Thread.sleep(1000);
                    publishProgress(_counter);
                } catch (InterruptedException e) {
                    // Ignore
                }
            }
            // The return statement was missing in the original listing;
            // doInBackground is declared to return a String, so return null here.
            return null;
        }

        @Override
        protected void onProgressUpdate(Integer... values) {
            super.onProgressUpdate(values);
            int progress = values[0];
            String progressStr = "Counting " + progress + " out of " + _maxCount;
            _parentActivity._textView.setText(progressStr);
            _parentActivity._textView.invalidate();
        }

        @Override
        protected void onPostExecute(String result) {
            super.onPostExecute(result);
            _parentActivity._progressBar.setVisibility(ProgressBar.INVISIBLE);
            _parentActivity._progressBar.invalidate();
        }

        @Override
        protected void onCancelled() {
            super.onCancelled();
            _parentActivity._textView.setText("Request to cancel AsyncTask");
        }
    }

    This is the test case. Here AsyncTaskDemoActivity is a very simple Activity providing a UI for testing the AsyncTask:

    package kroz.andcookbook.test.threads.asynctask;

    import java.util.concurrent.ExecutionException;

    import kroz.andcookbook.R;
    import kroz.andcookbook.threads.asynctask.AsyncTaskDemo;
    import kroz.andcookbook.threads.asynctask.AsyncTaskDemoActivity;

    import android.content.Intent;
    import android.test.ActivityUnitTestCase;
    import android.widget.Button;

    public class AsyncTaskDemoTest2 extends ActivityUnitTestCase<AsyncTaskDemoActivity> {

        AsyncTaskDemo _atask;
        private Intent _startIntent;

        public AsyncTaskDemoTest2() {
            super(AsyncTaskDemoActivity.class);
        }

        protected void setUp() throws Exception {
            super.setUp();
            _startIntent = new Intent(Intent.ACTION_MAIN);
        }

        protected void tearDown() throws Exception {
            super.tearDown();
        }

        public final void testExecute() {
            startActivity(_startIntent, null, null);
            Button btnStart = (Button) getActivity().findViewById(R.id.Button01);
            btnStart.performClick();
            assertNotNull(getActivity());
        }
    }

    All this code works just fine, except that the AsyncTask doesn't invoke its notification methods when executed within the Android testing framework. Any ideas?

    Read the article

  • Best practices for using the Entity Framework with WPF DataBinding

    - by Ken Smith
    I'm in the process of building my first real WPF application (i.e., the first intended to be used by someone besides me), and I'm still wrapping my head around the best way to do things in WPF. It's a fairly simple data access application using the still-fairly-new Entity Framework, but I haven't been able to find a lot of guidance online for the best way to use these two technologies (WPF and EF) together. So I thought I'd toss out how I'm approaching it, and see if anyone has any better suggestions.

    I'm using the Entity Framework with SQL Server 2008. The EF strikes me as both much more complicated than it needs to be, and not yet mature, but Linq-to-SQL is apparently dead, so I might as well use the technology that MS seems to be focusing on.

    This is a simple application, so I haven't (yet) seen fit to build a separate data layer around it. When I want to get at data, I use fairly simple Linq-to-Entity queries, usually straight from my code-behind, e.g.:

    var families = from family in entities.Family.Include("Person")
                   orderby family.PrimaryLastName, family.Tag
                   select family;

    Linq-to-Entity queries return an IOrderedQueryable result, which doesn't automatically reflect changes in the underlying data; e.g., if I add a new record via code to the entity data model, the existence of this new record is not automatically reflected in the various controls referencing the Linq query. Consequently, I'm throwing the results of these queries into an ObservableCollection, to capture underlying data changes:

    familyOC = new ObservableCollection<Family>(families.ToList());

    I then map the ObservableCollection to a CollectionViewSource, so that I can get filtering, sorting, etc., without having to return to the database:

    familyCVS.Source = familyOC;
    familyCVS.View.Filter = new Predicate<object>(ApplyFamilyFilter);
    familyCVS.View.SortDescriptions.Add(new System.ComponentModel.SortDescription("PrimaryLastName", System.ComponentModel.ListSortDirection.Ascending));
    familyCVS.View.SortDescriptions.Add(new System.ComponentModel.SortDescription("Tag", System.ComponentModel.ListSortDirection.Ascending));

    I then bind the various controls and what-not to that CollectionViewSource:

    <ListBox DockPanel.Dock="Bottom" Margin="5,5,5,5" Name="familyList"
        ItemsSource="{Binding Source={StaticResource familyCVS}, Path=., Mode=TwoWay}"
        IsSynchronizedWithCurrentItem="True"
        ItemTemplate="{StaticResource familyTemplate}"
        SelectionChanged="familyList_SelectionChanged" />

    When I need to add or delete records/objects, I manually do so from both the entity data model and the ObservableCollection:

    private void DeletePerson(Person person)
    {
        entities.DeleteObject(person);
        entities.SaveChanges();
        personOC.Remove(person);
    }

    I'm generally using StackPanel and DockPanel controls to position elements. Sometimes I'll use a Grid, but it seems hard to maintain: if you want to add a new row to the top of your grid, you have to touch every control directly hosted by the grid to tell it to use a new line. Uggh. (Microsoft has never really seemed to get the DRY concept.)

    I almost never use the VS WPF designer to add, modify or position controls. The WPF designer that comes with VS is sort of vaguely helpful to see what your form is going to look like, but even then, well, not really, especially if you're using data templates that aren't binding to data that's available at design time. If I need to edit my XAML, I take it like a man and do it manually. Most of my real code is in C# rather than XAML. As I've mentioned elsewhere, entirely aside from the fact that I'm not yet used to "thinking" in it, XAML strikes me as a clunky, ugly language that also happens to come with poor designer and intellisense support, and that can't be debugged. Uggh. Consequently, whenever I can see clearly how to do something in C# code-behind that I can't easily see how to do in XAML, I do it in C#, with no apologies. There's been plenty written about how it's a good practice to almost never use code-behind in a WPF page (say, for event-handling), but so far at least, that makes no sense to me whatsoever. Why should I do something in an ugly, clunky language with god-awful syntax, an astonishingly bad editor, and virtually no type safety, when I can use a nice, clean language like C# that has a world-class editor, near-perfect intellisense, and unparalleled type safety?

    So that's where I'm at. Any suggestions? Am I missing any big parts of this? Anything that I should really think about doing differently?
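    One note on the add/delete pattern above: the insert side can mirror DeletePerson so the context and the ObservableCollection never drift apart. A minimal sketch, assuming the same entities context and personOC collection from the post; AddPerson and the generated AddToPerson method are illustrative, not taken from the original:

    private void AddPerson(Person person)
    {
        entities.AddToPerson(person); // hypothetical EF v1-style generated AddTo<EntitySet> method
        entities.SaveChanges();       // persist the new row
        personOC.Add(person);         // keep the bound ObservableCollection in sync
    }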

    Read the article

  • Oracle performance problem

    - by jreid42
    We are using an Oracle 11G machine that is very powerful and has redundant storage, etc. It's a beast, from what I have been told. We just got this DB for a tool that had about 20 people using it when I first came on as a co-op; now it's upwards of 150 people, and I am the only one working on it :( We currently have a system in place that distributes Perl scripts across our entire data center, essentially giving us a sort of "grid" computing power. The Perl scripts run a sort of simulation and report the results back to the database; they do selects and inserts. The load is not very high for each script, but it could be happening across 20-50 systems at the same time. We then have multiple data centers and users all hitting the same database with this same approach. Our main problem with this is that our database is getting overloaded with connections and having to drop some. We sometimes have upwards of 500 connections. These are old Perl scripts and they do not handle this well: essentially they fail and the results are lost. I would rather avoid having to rewrite a lot of them, as they are poorly written and a headache to even look at. The database itself is not overloaded; just the connection overhead is too high. We open a connection, make a quick query and then drop the connection: very short connections, but many of them. The database team has basically said we need to lower the number of connections or they are going to ignore us. Because this is distributed across our farm we can't implement persistent connections. I do this with our web server, but it's on a fixed system; the others are Perl scripts that get opened and closed by the distribution tool and thus aren't always running. What would be my best approach to resolving this issue? The scripts themselves can wait for a connection to be open; they do not need to act immediately. Some sort of queuing system? It's been suggested that I set up a few instances of a tool called "SQL Relay", maybe one in each data center. How reliable is this tool? How good is this approach? Would it work for what we need? We could have one for each data center and relay requests through it to our main database, keeping a pipeline of open persistent connections. Does this make sense? Are there any other suggestions you can make? Any ideas? Any help would be greatly appreciated. Sadly, I am just a co-op student working for a very big company, and somehow all of this has landed on my shoulders (there is literally nobody to ask for help; it's a hardware company, everybody is a hardware engineer, and the database team is useless and in India), and I am quite lost as to what the best approach would be. I am extremely overworked and this problem is interfering with ongoing progress and basically needs to be resolved as quickly as possible; preferably without rewriting the whole system, purchasing hardware (not gonna happen), or shooting myself in the foot. HELP LOL!

    Read the article

  • Passing integer lists in a SQL query, best practices

    - by Artiom Chilaru
    I'm currently looking at ways to pass lists of integers in a SQL query, trying to decide which of them is best in which situation, what the benefits of each are, and what the pitfalls are that should be avoided :) Right now I know of 3 ways that we currently use in our application.

    1) Table valued parameter. Create a new table valued parameter in SQL Server:

    CREATE TYPE [dbo].[TVP_INT] AS TABLE(
        [ID] [int] NOT NULL
    )

    Then run the query against it:

    using (var conn = new SqlConnection(DataContext.GetDefaultConnectionString))
    {
        var comm = conn.CreateCommand();
        comm.CommandType = CommandType.Text;
        comm.CommandText = @"
            UPDATE DA
            SET [tsLastImportAttempt] = CURRENT_TIMESTAMP
            FROM [Account] DA
            JOIN @values IDs ON DA.ID = IDs.ID";
        comm.Parameters.Add(new SqlParameter("values", downloadResults.Select(d => d.ID).ToDataTable()) { TypeName = "TVP_INT" });
        conn.Open();
        comm.ExecuteScalar();
    }

    The major disadvantage of this method is the fact that Linq doesn't support table valued parameters (if you create an SP with a TVP parameter, Linq won't be able to run it) :(

    2) Convert the list to binary and use it in Linq! This is a bit better: create an SP, and you can run it within Linq :) To do this, the SP will have an IMAGE parameter, and we'll be using a user defined function (UDF) to convert this to a table. We currently have implementations of this function written in C++ and in assembly; both have pretty much the same performance :) Basically, each integer is represented by 4 bytes and passed to the SP. In .NET we have an extension method that converts an IEnumerable to a byte array.

    The extension method:

    public static Byte[] ToBinary(this IEnumerable<Int32> intList)
    {
        return ToBinaryEnum(intList).ToArray();
    }

    private static IEnumerable<Byte> ToBinaryEnum(IEnumerable<Int32> intList)
    {
        IEnumerator<Int32> marker = intList.GetEnumerator();
        while (marker.MoveNext())
        {
            Byte[] result = BitConverter.GetBytes(marker.Current);
            Array.Reverse(result);
            foreach (byte b in result)
                yield return b;
        }
    }

    The SP:

    CREATE PROCEDURE [Accounts-UpdateImportAttempts]
        @values IMAGE
    AS
    BEGIN
        UPDATE DA
        SET [tsLastImportAttempt] = CURRENT_TIMESTAMP
        FROM [Account] DA
        JOIN dbo.udfIntegerArray(@values, 4) IDs ON DA.ID = IDs.Value4
    END

    And we can use it by running the SP directly, or in any Linq query we need:

    using (var db = new DataContext())
    {
        db.Accounts_UpdateImportAttempts(downloadResults.Select(d => d.ID).ToBinary());
        // or
        var accounts = db.Accounts
            .Where(a => db.udfIntegerArray(downloadResults.Select(d => d.ID).ToBinary(), 4)
                .Select(i => i.Value4)
                .Contains(a.ID));
    }

    This method has the benefit of using compiled queries in Linq (which will have the same SQL definition and query plan, so will also be cached), and can be used in SPs as well. Both these methods are theoretically unlimited, so you can pass millions of ints at a time :)

    3) The simple Linq .Contains(). It's a simpler approach, and is perfect in simple scenarios, but is of course limited by this:

    using (var db = new DataContext())
    {
        var accounts = db.Accounts
            .Where(a => downloadResults.Select(d => d.ID).Contains(a.ID));
    }

    The biggest drawback of this method is that each integer in the downloadResults variable will be passed as a separate parameter. In this case, the query is limited by SQL (the maximum allowed number of parameters in a SQL query, which is a couple of thousand, if I remember right). So I'd like to ask: which of these do you think is best, and what other methods and approaches have I missed?
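    For completeness, one common workaround for the parameter limit in approach 3 (not among the original three, just a sketch under the same assumptions of a DataContext with an Accounts table): batch the IDs into chunks that stay under the cap and combine the results.

    // BatchSize is a hypothetical value chosen to stay below SQL Server's
    // ~2100-parameter limit on a single query.
    const int BatchSize = 2000;

    var ids = downloadResults.Select(d => d.ID).ToList();
    var accounts = new List<Account>();

    using (var db = new DataContext())
    {
        for (int i = 0; i < ids.Count; i += BatchSize)
        {
            var chunk = ids.Skip(i).Take(BatchSize).ToList();
            // Each iteration generates one IN (...) query with at most BatchSize parameters.
            accounts.AddRange(db.Accounts.Where(a => chunk.Contains(a.ID)));
        }
    }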

    Read the article

  • Parallax backgrounds in OpenGL ES on the iPhone

    - by Scott
    I've got basically a 2d game on the iPhone and I'm trying to set up multiple backgrounds that scroll at different speeds (known as parallax backgrounds). So my thought was to just stick the backgrounds BEHIND the foreground using different z-coordinate planes, and just make them bigger than the foreground (in size) to accommodate, so that the whole thing can be scrolled (just at a different speed). And (as far as I know) I basically implemented that. The only problem is that it seems to entirely ignore whatever z-value I give it, or rather it just zeroes all of them. I see the background (I've only tested ONE background so far, to keep it simple... so for now I just have a foreground and I want one background scrolling at a different speed), but it scrolls 1:1 with my foreground, so it obviously doesn't look right, and most of it is cut off (cause it's bigger). And I've tried various z-values for the background and various near/far clipping planes... it's always the same. I'm probably just doing one simple thing wrong, but I can't figure it out. I'm wondering if it has to do with me using only 2 coordinates in glVertexPointer for the foreground? (Of course for the background I AM passing in 3.) I'll post some code:

    This is some initial setup:

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrthof(-1.0f, 1.0f, -1.5f, 1.5f, -10.0f, 10.0f);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glEnableClientState(GL_VERTEX_ARRAY);
    //glEnableClientState(GL_COLOR_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    //transparency
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

    A little bit about my foreground's float array: it's interleaved. For my foreground it goes vertex x, vertex y, texture x, texture y, repeat. This all works just fine.

    This is my FOREGROUND rendering:

    glVertexPointer(2, GL_FLOAT, 4*sizeof(GLfloat), texes);
    glTexCoordPointer(2, GL_FLOAT, 4*sizeof(GLfloat), (GLvoid*)texes + 2*sizeof(GLfloat));
    glDrawArrays(GL_TRIANGLES, 0, indexCount / 4);

    BACKGROUND rendering: same drill here, except this time it goes vertex x, vertex y, vertex z, texture x, texture y, repeat. Note the z value this time. I did make sure the data in this array was correct while debugging (getting the right z values). And again, it shows up... it's just not going far back in the distance like it should.

    glVertexPointer(3, GL_FLOAT, 5*sizeof(GLfloat), b1Texes);
    glTexCoordPointer(2, GL_FLOAT, 5*sizeof(GLfloat), (GLvoid*)b1Texes + 3*sizeof(GLfloat));
    glDrawArrays(GL_TRIANGLES, 0, b1IndexCount / 5);

    And to move my camera, I just do a simple glTranslatef(x, y, 0.0f); I'm not understanding what I'm doing wrong, because this seems like the most basic 3D function imaginable: things further away are smaller and don't move as fast when the camera moves. Not the case for me. Seems like it should be pretty basic and not even really be affected by my projection and all that (though I've even tried doing glFrustum just for fun, no success). Please help, I feel like it's just one dumb thing. I will post more code if necessary.

    Read the article

  • How can I reset addAttributeToFilter in Magento searches

    - by Bobby
    I'm having problems getting the addAttributeToFilter function within a loop to behave in Magento. I have test data in my store to support searches for all of the following data:

    $attributeSelections = array(
        array('size' => 44, 'color' => 67, 'manufacturer' => 17),
        array('size' => 43, 'color' => 69, 'manufacturer' => 17),
        array('size' => 42, 'color' => 70, 'manufacturer' => 17));

    And my code to search through these combinations:

    foreach ($attributeSelections as $selection) {
        $searcher = Mage::getSingleton('catalogsearch/advanced')->getProductCollection();
        foreach ($selection as $k => $v) {
            $searcher->addAttributeToFilter("$k", array('eq' => "$v"));
            echo "$k: $v<br />";
        }
        $result = $searcher->getData();
        print_r($result);
    }

    This loop gives the following results (slightly sanitised for viewing pleasure):

    size: 44 color: 67 manufacturer: 17
    Array ( [0] => Array ( [entity_id] => 2965 [entity_type_id] => 4 [attribute_set_id] => 28 [type_id] => simple [sku] => 1006-0001 [size] => 44 [color] => 67 [manufacturer] => 17 ) )

    size: 43 color: 69 manufacturer: 17
    Array ( [0] => Array ( [entity_id] => 2965 [entity_type_id] => 4 [attribute_set_id] => 28 [type_id] => simple [sku] => 1006-0001 [size] => 44 [color] => 67 [manufacturer] => 17 ) )

    size: 42 color: 70 manufacturer: 17
    Array ( [0] => Array ( [entity_id] => 2965 [entity_type_id] => 4 [attribute_set_id] => 28 [type_id] => simple [sku] => 1006-0001 [size] => 44 [color] => 67 [manufacturer] => 17 ) )

    So my loop is functioning and generating the search. However, the values fed into addAttributeToFilter on the first iteration of the loop seem to remain stored for each search. I've tried clearing my search object, for example unset($searcher) and unset($result). I've also tried Magento functions such as getNewEmptyItem(), resetData(), distinct() and clear(), but none have the desired effect. Basically what I am trying to do is check for duplicate products before my script attempts to programmatically create a product with these attribute combinations. The array of attribute selections may be of varying sizes, hence the need for a loop. I would be very appreciative if anyone might be able to shed some light on my problem.

    Read the article

  • Is the Unix Philosophy still relevant in the Web 2.0 world?

    - by David Titarenco
    Introduction
    Hello, let me give you some background before I begin. I started programming when I was 5 or 6 on my dad's PSION II (some primitive BASIC-like language), then I learned more and more, eventually inching my way up to C, C++, Java, PHP, JS, etc. I think I'm a pretty decent coder. I think most people would agree. I'm not a complete social recluse, but I do stuff like write a virtual machine for fun. I've never taken a computer course in college because I've been in and out for the past couple of years and have only been taking core classes; never having been particularly amazing at school, perhaps I'm missing some basic tenet that most learn in CS101. I'm currently reading Coders at Work and this question is based on some ideas I read in there.

    A Brief (Fictionalized) Example
    So a certain sunny day I get an idea. I hire a designer and hammer away at some C/C++ code for a couple of months, soon thereafter releasing silvr.com, a website that transmutes lead into silver. Yep, I started my very own start-up and even gave it a clever web 2.0 name with a vowel missing. Mom and dad are proud. I come up with some numbers I should be seeing after 1, 2, 3, 6, 9, 12 months and set sail. Obviously, my transmuting server isn't perfect, sometimes it segfaults, sometimes it leaks memory. I fix it and keep truckin'. After all, gdb is my best friend. Eventually, I'm at a position where a very small community of people are happily transmuting lead into silver on a semi-regular basis, but they want to let their friends on MySpace know how many grams of lead they transmuted today. And they want to post images of their lead and silver nuggets on flickr. I'm losing out on potential traffic unless I let them log in with their Yahoo, Google, and Facebook accounts. They want webcam support and live cock fighting, merry-go-rounds and Jabberwockies. All these things seem necessary.

    The Aftermath
    Of course, I have to re-write the transmuting server! After all, I've been losing money all these months. I need OAuth libraries and OpenID libraries, JSON support, and the only stable Jabberwocky API is for Java. C++ isn't even an option anymore. I'm just one guy! The Java binary just grows and grows since I need some legacy Apache include for the JSON library, and some antiquated Sun dependency for OAuth support. Then I pick up a book like Coders at Work and read what people like jwz say about complexity... I think to myself: Keep it simple, stupid. I like simple things. I've always loved the Unix Philosophy but even after trying to keep the new server source modular and sleek, I loathe having to write one more line of code. It feels that I'm just piling crap on top of other crap. Maybe I'm naive thinking every piece of software can be simple and clever. Maybe it's just a phase... or is the Unix Philosophy basically dead when it comes to the current state of (web) development? I'm just kind of disheartened :(

    Read the article

  • Select number of rows for each group where two column values make one group

    - by Fábio Antunes
    I have two select statements joined by UNION ALL. In the first statement a where clause gathers only rows that have been shown previously to the user. The second statement gathers all rows that haven't been shown to the user; therefore I end up with the viewed results first and the non-viewed results after. Of course this could simply be achieved with the same select statement using a simple ORDER BY; however, the reason for two separate selects is simple once you realize what I hope to accomplish. Consider the following structure and data.

    +----+------+-----+--------+------+
    | id | from | to  | viewed | data |
    +----+------+-----+--------+------+
    | 1  | 1    | 10  | true   | .... |
    | 2  | 10   | 1   | true   | .... |
    | 3  | 1    | 10  | true   | .... |
    | 4  | 6    | 8   | true   | .... |
    | 5  | 1    | 10  | true   | .... |
    | 6  | 10   | 1   | true   | .... |
    | 7  | 8    | 6   | true   | .... |
    | 8  | 10   | 1   | true   | .... |
    | 9  | 6    | 8   | true   | .... |
    | 10 | 2    | 3   | true   | .... |
    | 11 | 1    | 10  | true   | .... |
    | 12 | 8    | 6   | true   | .... |
    | 13 | 10   | 1   | false  | .... |
    | 14 | 1    | 10  | false  | .... |
    | 15 | 6    | 8   | false  | .... |
    | 16 | 10   | 1   | false  | .... |
    | 17 | 8    | 6   | false  | .... |
    | 18 | 3    | 2   | false  | .... |
    +----+------+-----+--------+------+

    Basically I wish all non-viewed rows to be selected by the statement; that is accomplished by checking whether the viewed column is true or false. Pretty simple and straightforward, nothing to worry about here. However, when it comes to the rows already viewed, meaning the column viewed is TRUE, for those records I only want 3 rows to be returned for each group. The appropriate result in this instance should be the 3 most recent rows of each group.

    +----+------+-----+--------+------+
    | id | from | to  | viewed | data |
    +----+------+-----+--------+------+
    | 6  | 10   | 1   | true   | .... |
    | 7  | 8    | 6   | true   | .... |
    | 8  | 10   | 1   | true   | .... |
    | 9  | 6    | 8   | true   | .... |
    | 10 | 2    | 3   | true   | .... |
    | 11 | 1    | 10  | true   | .... |
    | 12 | 8    | 6   | true   | .... |
    +----+------+-----+--------+------+

    As you see from the ideal result set, we have three groups. Therefore the desired query for the viewed results should show a maximum of 3 rows for each grouping it finds. In this case these groupings were 10 with 1 and 8 with 6, both of which had three rows to be shown, while the other group, 2 with 3, only had one row to be shown. Please note that where from = x and to = y, that makes the same grouping as if it were from = y and to = x. Therefore, considering the first grouping (10 with 1), from = 10 and to = 1 is the same group as from = 1 and to = 10. There are plenty of groups in the whole table, and I wish only the 3 most recent rows of each to be returned by the select statement. That's my problem: I'm not sure how that can be accomplished in the most efficient way possible, considering the table will have hundreds if not thousands of records at some point. Thanks for your help.

    Note: The columns id, from, to and viewed are indexed; that should help with performance.

    PS: I'm unsure how to name this question exactly; if you have a better idea, be my guest and edit the title.

    Read the article

  • Getting an InvalidOperationException when deserialising XML

    - by Paul Johnson
    Hi, I'm writing a simple proof-of-concept application to load up an XML file and, depending on the very simple code, create a window and put something into it (it's for a much larger project). Due to limitations in Mono, I'm having to run it this way. The code I currently have looks like this:

    using System;
    using System.Collections.Generic;
    using System.ComponentModel;
    using System.Data;
    using System.Drawing;
    using System.Windows.Forms;
    using System.IO;
    using System.Collections;
    using System.Xml;
    using System.Xml.Serialization;

    namespace form_from_xml
    {
        public class xmlhandler : Form
        {
            public void loaddesign()
            {
                FormData f;
                f = null;
                try
                {
                    string path_env = Path.GetDirectoryName(Application.ExecutablePath) + Path.DirectorySeparatorChar;
                    // code dies on the line below
                    XmlSerializer s = new XmlSerializer(typeof(FormData));
                    TextReader r = new StreamReader(path_env + "designer-test.xml");
                    f = (FormData)s.Deserialize(r);
                    r.Close();
                }
                catch (System.IO.FileNotFoundException)
                {
                    MessageBox.Show("Unable to find the form file", "File not found", MessageBoxButtons.OK);
                }
            }
        }

        [XmlRoot("Forms")]
        public class FormData
        {
            private ArrayList formData;

            public FormData()
            {
                formData = new ArrayList();
            }

            [XmlElement("Element")]
            public Elements[] elements
            {
                get
                {
                    Elements[] elements = new Elements[formData.Count];
                    formData.CopyTo(elements);
                    return elements;
                }
                set
                {
                    if (value == null) return;
                    Elements[] elements = (Elements[])value;
                    formData.Clear();
                    foreach (Elements element in elements)
                        formData.Add(element);
                }
            }

            public int AddItem(Elements element)
            {
                return formData.Add(element);
            }
        }

        public class Elements
        {
            [XmlAttribute("formname")] public string name;
            [XmlAttribute("winxsize")] public int winxs;
            [XmlAttribute("winysize")] public int winys;
            [XmlAttribute("type")] public object type;
            [XmlAttribute("xpos")] public int xpos;
            [XmlAttribute("ypos")] public int ypos;
            [XmlAttribute("externaldata")] public bool external;
            [XmlAttribute("externalplace")] public string externalplace;
            [XmlAttribute("text")] public string text;
            [XmlAttribute("questions")] public bool questions;
            [XmlAttribute("questiontype")] public object qtype;
            [XmlAttribute("numberqs")] public int numberqs;
            [XmlAttribute("answerfile")] public string ansfile;
            [XmlAttribute("backlink")] public int backlink;
            [XmlAttribute("forwardlink")] public int forwardlink;

            public Elements() { }

            public Elements(string fn, int wx, int wy, object t, int x, int y, bool ext, string extpl, string te, bool q, object qt, int num, string ans, int back, int end)
            {
                name = fn; winxs = wx; winys = wy; type = t; xpos = x; ypos = y;
                external = ext; externalplace = extpl; text = te; questions = q;
                qtype = qt; numberqs = num; ansfile = ans; backlink = back; forwardlink = end;
            }
        }
    }

    With a very simple

    xmlhandler xml = new xmlhandler();
    xml.loaddesign();

    attached to a WinForms button. Everything is in the same namespace and the XML file actually exists. This is annoying me now - can anyone spot the error of my ways? Paul

    Read the article

  • My PayPal button will not link to PayPal. It only refreshes the page. Why?

    - by JPJedi
    I have the following code in my registration page to create a PayPal button, but when I click on the button it just refreshes the page. Is there something I am missing? I should be able to include a PayPal button on an .aspx page, right?

    <asp:Content ID="Content1" ContentPlaceHolderID="head" Runat="Server">
    </asp:Content>
    <asp:Content ID="Content2" ContentPlaceHolderID="ContentPlaceHolder1" Runat="Server">
        <asp:Panel runat="server" ID="pnlRegisterPage" CssClass="registerPage">
            <table>
                <tr>
                    <td><p>Plain text</p></td>
                    <td>
                        <form action="https://www.paypal.com/cgi-bin/webscr" method="post">
                            <input type="hidden" name="cmd" value="_s-xclick">
                            <input type="hidden" name="hosted_button_id" value="Z8TACKRHQR722">
                            <table>
                                <tr><td><input type="hidden" name="on0" value="Registration Type">Registration Type</td></tr>
                                <tr><td><select name="os0">
                                    <option value="Team">Team $80.00</option>
                                    <option value="Individual">Individual $40.00</option>
                                </select></td></tr>
                            </table>
                            <input type="hidden" name="currency_code" value="USD">
                            <input type="image" src="https://www.paypal.com/en_US/i/btn/btn_buynowCC_LG.gif" border="0" name="submit" alt="PayPal - The safer, easier way to pay online!">
                            <img alt="" border="0" src="https://www.paypal.com/en_US/i/scr/pixel.gif" width="1" height="1">
                        </form>
                    </td>
                </tr>
            </table>
        </asp:Panel>
    </asp:Content>

    Master Page:

    <body>
        <form id="form1" runat="server">
            <asp:ToolkitScriptManager ID="ToolkitScriptManager1" runat="server">
            </asp:ToolkitScriptManager>
            <div class="masterbody">
                <center><asp:Image runat="server" ID="imgLogo" ImageUrl="" /></center>
                <div class="menubar">
                    <div class="loginview">
                        <asp:LoginView ID="MenuBar" runat="server">
                            <AnonymousTemplate>
                            </AnonymousTemplate>
                            <LoggedInTemplate>
                            </LoggedInTemplate>
                        </asp:LoginView>
                    </div>
                </div>
                <div>
                    <asp:ContentPlaceHolder ID="ContentPlaceHolder1" runat="server">
                    </asp:ContentPlaceHolder>
                </div>
            </div>
            <div class="footer"></div>
        </form>
    </body>

    Read the article

  • GLSL Error: failed to preprocess the source. How can I troubleshoot this?

    - by Brent Parker
    I'm trying to learn to play with OpenGL GLSL shaders. I've written a very simple program to simply create a shader and compile it. However, whenever I get to the compile step, I get the error:

    Error: Preprocessor error
    Error: failed to preprocess the source.

    Here's my very simple code:

    #include <GL/gl.h>
    #include <GL/glu.h>
    #include <GL/glut.h>
    #include <GL/glext.h>
    #include <time.h>
    #include <stdio.h>
    #include <iostream>
    #include <stdlib.h>

    using namespace std;

    const int screenWidth = 640;
    const int screenHeight = 480;

    const GLchar* gravity_shader[] = {
        "#version 140"
        "uniform float t;"
        "uniform mat4 MVP;"
        "in vec4 pos;"
        "in vec4 vel;"
        "const vec4 g = vec4(0.0, 0.0, -9.80, 0.0);"
        "void main() {"
        "    vec4 position = pos;"
        "    position += t*vel + t*t*g;"
        "    gl_Position = MVP * position;"
        "}"
    };

    double pointX = (double)screenWidth/2.0;
    double pointY = (double)screenWidth/2.0;

    void initShader() {
        GLuint shader = glCreateShader(GL_VERTEX_SHADER);
        glShaderSource(shader, 1, gravity_shader, NULL);
        glCompileShader(shader);
        GLint compiled = true;
        glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
        if(!compiled) {
            GLint length;
            GLchar* log;
            glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &length);
            log = (GLchar*)malloc(length);
            glGetShaderInfoLog(shader, length, &length, log);
            std::cout << log << std::endl;
            free(log);
        }
        exit(0);
    }

    bool myInit() {
        initShader();
        glClearColor(1.0f, 1.0f, 1.0f, 0.0f);
        glColor3f(0.0f, 0.0f, 0.0f);
        glPointSize(1.0);
        glLineWidth(1.0f);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluOrtho2D(0.0, (GLdouble) screenWidth, 0.0, (GLdouble) screenHeight);
        glEnable(GL_DEPTH_TEST);
        return true;
    }

    int main(int argc, char** argv) {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
        glutInitWindowSize(screenWidth, screenHeight);
        glutInitWindowPosition(100, 150);
        glutCreateWindow("Mouse Interaction Display");
        myInit();
        glutMainLoop();
        return 0;
    }

    Where am I going wrong? If it helps, I am trying to do this on an Acer Aspire One with an Atom processor and integrated Intel video, running the latest Ubuntu. It's not very powerful, but then again, this is a very simple shader. Thanks a lot for taking a look!

    Read the article

  • Why does jquery leak memory so badly?

    - by Thomas Lane
    This is kind of a follow-up to a question I posted last week: http://stackoverflow.com/questions/2429056/simple-jquery-ajax-call-leaks-memory-in-ie

    I love the jquery syntax and all of its nice features, but I've been having trouble with a page that automatically updates table cells via ajax calls leaking memory. So I created two simple test pages for experimenting. Both pages do an ajax call every .1 seconds. After each successful ajax call, a counter is incremented and the DOM is updated. The script stops after 1000 cycles. One uses jquery for both the ajax call and to update the DOM. The other uses the Yahoo API for the ajax and does a document.getElementById(...).innerHTML to update the DOM.

    The jquery version leaks memory badly. Running in drip (on XP Home with IE7), it starts at 9MB and finishes at about 48MB, with memory growing linearly the whole time. If I comment out the line that updates the DOM, it still finishes at 32MB, suggesting that even simple DOM updates leak a significant amount of memory. The non-jquery version starts and finishes at about 9MB, regardless of whether it updates the DOM.

    Does anyone have a good explanation of what is causing jquery to leak so badly? Am I missing something obvious? Is there a circular reference that I'm not aware of? Or does jquery just have some serious memory issues?

    Here is the source for the leaky (jquery) version:

    <html>
    <head>
    <script type="text/javascript" src="http://www.google.com/jsapi"></script>
    <script type="text/javascript">
        google.load('jquery', '1.4.2');
    </script>
    <script type="text/javascript">
        var counter = 0;
        leakTest();

        function leakTest() {
            $.ajax({
                url: '/html/delme.x',
                type: 'GET',
                success: incrementCounter
            });
        }

        function incrementCounter(data) {
            if (counter < 1000) {
                counter++;
                $('#counter').text(counter);
                setTimeout(leakTest, 100);
            }
            else
                $('#counter').text('finished.');
        }
    </script>
    </head>
    <body>
    <div>Why is memory usage going up?</div>
    <div id="counter"></div>
    </body>
    </html>

    And here is the non-leaky version:

    <html>
    <head>
    <script type="text/javascript" src="http://yui.yahooapis.com/2.8.0r4/build/yahoo/yahoo-min.js"></script>
    <script type="text/javascript" src="http://yui.yahooapis.com/2.8.0r4/build/event/event-min.js"></script>
    <script type="text/javascript" src="http://yui.yahooapis.com/2.8.0r4/build/connection/connection_core-min.js"></script>
    <script type="text/javascript">
        var counter = 0;
        leakTest();

        function leakTest() {
            YAHOO.util.Connect.asyncRequest('GET', '/html/delme.x', {success: incrementCounter});
        }

        function incrementCounter(o) {
            if (counter < 1000) {
                counter++;
                document.getElementById('counter').innerHTML = counter;
                setTimeout(leakTest, 100);
            }
            else
                document.getElementById('counter').innerHTML = 'finished.'
        }
    </script>
    </head>
    <body>
    <div>Memory usage is stable, right?</div>
    <div id="counter"></div>
    </body>
    </html>

    Read the article
