Search Results

Search found 6905 results on 277 pages for 'fork join'.

Page 62/277 | < Previous Page | 58 59 60 61 62 63 64 65 66 67 68 69  | Next Page >

  • Coherence Webcast for Developers July 11

    - by jeckels
    Coming on July 11th, we look forward to having you join us for a special Coherence webcast - just for developers! Want to learn how you, the developer, can make applications Big Data and Fast Data ready? Want to be able to customize and manage your applications and services to provide real-time data and processing with ease? Then this webcast is for you. Coherence Live Webcast Developers: Deploy Highly-Available Custom Services on Your Data Grid Products July 11, 10am Pacific Time >> Register now! << (of course, it's free) Join Brian Oliver of the Coherence team to see how you can create and deploy customized, highly-available services for your data grid, and how real-time data processing will allow you to provide unmatched end-user experiences. We look forward to having you join us.

    Read the article

  • What's the best way to manage a multi-user project on github?

    - by Jim
    I'm looking to host a new project on github. This project will be worked on by two coders. One of these coders will also be the project manager who will have overall control over the github repo. I've followed the instructions regarding forking a github project at http://help.github.com/forking/. This all works fine and I'm working on the basis that the main repo is controlled by the lead coder, with the secondary coder working on a fork and submitting pull requests to the lead. A problem arises with this, however, when changes are made to the main branch and not pulled by the secondary coder into their fork. The secondary coder could then make changes to their own fork and submit a pull request to the lead, only for their patches to not match up with the main branch. What's the best way to manage this? I've not committed too much time to git/github, so I'm totally up for checking out other hosted solutions if they're better. Simplicity is the key!

    Read the article

  • match elements from two files, how to write the intended format to a new file

    - by user2489612
    I am trying to update my text file by matching the first column to another, updated file's first column; after a match is found, it updates the old file. Here is my old file: Name Chr Pos ind1 in2 in3 ind4 foot 1 5 aa bb cc ford 3 9 bb cc 00 fake 3 13 dd ee ff fool 1 5 ee ff gg fork 1 3 ff gg ee Here is the new file: Name Chr Pos foot 1 5 fool 2 5 fork 2 6 ford 3 9 fake 3 13 The updated file should look like: Name Chr Pos ind1 in2 in3 ind4 foot 1 5 aa bb cc fool 2 5 ee ff gg fork 2 6 ff gg ee ford 3 9 bb cc 00 fake 3 13 dd ee ff Here is my code: #!/usr/bin/env python import sys inputfile_1 = sys.argv[1] inputfile_2 = sys.argv[2] outputfile = sys.argv[3] inputfile1 = open(inputfile_1, 'r') inputfile2 = open(inputfile_2, 'r') outputfile = open(outputfile, 'w') ind = inputfile1.readlines() cm = inputfile2.readlines()[1:] outputfile.write(ind[0]) #add header for i in ind: i = i.split() for j in cm: j = j.split() if j[0] == i[0]: outputfile.writelines(j[0:3] + i[3:]) inputfile1.close() inputfile2.close() outputfile.close() When I ran it, it returned a single column rather than the format I wanted; any suggestions? Thanks!
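
    The core issue in the code above is that writelines() is handed a bare list of fields with no separators or trailing newline, so everything runs together. Below is a minimal sketch of one possible fix: it builds a lookup of the old file's extra columns and walks the new file so the output follows the new file's order. The function and variable names are illustrative, not from the original post.

    import sys

    def update_file(old_path, new_path, out_path):
        # hypothetical helper; not the original poster's script
        with open(old_path) as f:
            old_lines = f.readlines()
        header, old_rows = old_lines[0], old_lines[1:]
        # map Name -> the extra columns (ind1, in2, in3, ...) from the old file
        extras = {r.split()[0]: r.split()[3:] for r in old_rows if r.strip()}

        with open(new_path) as f:
            new_rows = f.readlines()[1:]        # skip the new file's header

        with open(out_path, "w") as out:
            out.write(header)                   # keep the old header line
            for row in new_rows:
                if not row.strip():
                    continue
                name, chrom, pos = row.split()[:3]
                # join the fields with a separator and end each row with a
                # newline so the columns do not collapse into one string
                out.write(" ".join([name, chrom, pos] + extras.get(name, [])) + "\n")

    if __name__ == "__main__":
        update_file(sys.argv[1], sys.argv[2], sys.argv[3])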

    Read the article

  • Empathy auto accept group chat invite

    - by Sivaji
    I'm using Empathy 2.34.0 as a chat client for an account hosted on Google Apps (server talk.google.com). I'm happy with the features that Empathy provides and its integration with Google chat; however, for group chat, when the request is received I need to click the "join" button shown in a popup to get started. This makes sense, but I would like to know if there is any way to automatically join the chat room without clicking the "join" button, as I use it only with trusted users. Besides, the messages shared after the invite request and before my entry to the chat room are not accessible to me. I looked around the Empathy settings but couldn't find anything useful; wondering if I can get some help from here. Thanks.

    Read the article

  • Non-trivial functions that operate on any monad

    - by Strilanc
    I'm looking for examples of interesting methods that take an arbitrary monad and do something useful with it. Monads are extremely general, so methods that operate on monads are widely applicable. On the other hand, methods I know of that can apply to any monad tend to be... really, really trivial. Barely worth extracting into a function. Here's a really boring example: joinTwice. It just flattens an m m m t into an m t: join n = n >>= id joinTwice n = (join . join) n main = print (joinTwice [[[1],[2, 3]], [[4]]]) -- prints [1,2,3,4] The only non-trivial method for monads that I know of is bindFold (see my answer below). Are there more?

    Read the article

  • Public JCP EC Meeting on 10 June

    - by Heather VanCura
    The next JCP EC Meeting is open to the public!  We hope you will join us on Tuesday, 10 June at 08:00 AM PDT.  Agenda includes a discussion on the latest JCP.Next news--JSR 364, Broadening JCP Membership. We hope you will join us, but if you cannot attend, the recording and materials will also be public on the JCP.org multimedia page. Meeting details below. ------------------------------------------------------- Topic: Public EC Meeting Date: Tuesday, June 10, 2014 Time: 8:00 am, Pacific Daylight Time (San Francisco, GMT-07:00) Meeting Number: 807 111 580 Meeting Password: 6893 ------------------------------------------------------- To start or join the online meeting ------------------------------------------------------- Go to https://jcp.webex.com/ ------------------------------------------------------- Audio conference information ------------------------------------------------------- +1 (866) 682-4770 (US) Conference code: 5731908 Security code: 6893 Global access numbers

    Read the article

  • Stairway to T-SQL DML Level 4: The Mathematics of SQL: Part 1

    A relational database contains tables that relate to each other by key values. When querying data from these related tables you may choose to select data from a single table or many tables. If you select data from many tables, you normally join those tables together using specified join criteria. The concepts of selecting data from tables and joining tables together are all about managing and manipulating sets of data. In Level 4 of this Stairway we will explore the concepts of set theory and mathematical operators to join, merge, and return data from multiple SQL Server tables.

    Read the article

  • Oracle Partner Specialists – Sell & Deliver High Value Products to Customers

    - by Richard Lefebvre
    Do you want to know where to find useful information about partner training and other activities to complete the Oracle Specializations available in the country where you are based? Go to the EMEA partner enablement blog and read the latest information regarding training opportunities you can join for Cloud Services, Applications, Business Intelligence, Middleware, Database 12c, Engineered Systems, as well as Server & Storage. Recently, we announced new TestFest events in France, which you can join to pass your own Implementation Assessment within the Specialization category you have already chosen. To find out where and when the next TestFest close to your location will take place, please contact [email protected] or watch out for further announcements of TestFest events in your home country. Return to the EMEA Partner Enablement Blog from time to time to update your own Specialization and join the latest training for Sales, Presales or Implementation Specialists: https://blogs.oracle.com/opnenablement/

    Read the article

  • How could my code compile correctly without the necessary headers?

    - by ZhengZhiren
    I use the functions fork(), exec()... But how can this program compile without including some extra headers (like sys/types.h, sys/wait.h)? I use Ubuntu 10.04 with gcc version 4.4.3. #include <stdio.h> #include <stdlib.h> int main() { pid_t pid; printf("before fork\n"); pid = fork(); if(pid == 0) { /*child*/ if(execvp("./cpuid", NULL)) { printf("error\n"); exit(0); } } else { if(wait(NULL) != -1) { printf("ok\n"); } } return 0; }

    Read the article

  • autocomplete-like feature with a python dict

    - by tipu
    In PHP, I had this line: matches = preg_grep('/^for/', array_keys($hash)); What it would do is grab the words fork, form, etc. that are in $hash. In Python, I have a dict with 400,000 words. Its keys are words I'd like to present in an autocomplete-like feature (the values in this case are meaningless). How would I be able to return the keys from my dictionary that match the input? For example (as used earlier), if I have my_dict = {"fork" : True, "form" : True, "fold" : True, "fame" : True} and I get some input "for", it'll return a list of "fork", "form", "fold"
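
    A rough Python analogue of PHP's preg_grep('/^for/', array_keys($hash)) is to filter the dict's keys on a prefix with str.startswith (or re.match if a real regex is needed); the sketch below uses illustrative names. Note that the strict prefix "for" only matches "fork" and "form", while "fo" also picks up "fold".

    # illustrative names, not from the original post
    my_dict = {"fork": True, "form": True, "fold": True, "fame": True}

    def complete(prefix, words):
        # return every key that begins with the given prefix
        return [w for w in words if w.startswith(prefix)]

    print(complete("fo", my_dict))    # ['fork', 'form', 'fold']
    print(complete("for", my_dict))   # ['fork', 'form']

    For 400,000 keys a linear scan like this may still be acceptable, but keeping the keys in a sorted list and locating the matching range with the bisect module (or using a trie) is usually much faster for autocomplete-style lookups.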

    Read the article

  • Speed up SQL Server queries with PREFETCH

    - by Akshay Deep Lamba
    Problem The SAN data volume has a throughput capacity of 400MB/sec; however, my query is still running slow and it is waiting on I/O (PAGEIOLATCH_SH). Windows Performance Monitor shows a data volume speed of 4MB/sec. Where is the problem and how can I find it? Solution This is another summary of a great article published by R. Meyyappan at www.sqlworkshops.com. In my opinion, this is the first article that highlights and explains with working examples how PREFETCH determines the performance of a Nested Loop join. First of all, I just want to recall that Prefetch is a mechanism with which SQL Server can fire up many I/O requests in parallel for a Nested Loop join. When SQL Server executes a Nested Loop join, it may or may not enable Prefetch according to the number of rows in the outer table. If the number of rows in the outer table is greater than 25 then SQL will enable and use Prefetch to speed up query performance, but it will not if it is less than 25 rows. In this section we are going to see different scenarios where prefetch is automatically enabled or disabled. These examples only use two tables, RegionalOrders and Orders. If you want to create the sample tables and sample data, please visit www.sqlworkshops.com. The breakdown of the data in the RegionalOrders table is shown below and the Orders table contains about 6 million rows. In this first example, I am creating a stored procedure against the two tables and then executing it. Before running the stored procedure, I am going to include the actual execution plan. --Example provided by www.sqlworkshops.com --Create procedure that pulls orders based on City --Do not forget to include the actual execution plan CREATE PROC RegionalOrdersProc @City CHAR(20) AS BEGIN DECLARE @OrderID INT, @OrderDetails CHAR(200) SELECT @OrderID = o.OrderID, @OrderDetails = o.OrderDetails FROM RegionalOrders ao INNER JOIN Orders o ON (o.OrderID = ao.OrderID) WHERE City = @City END GO SET STATISTICS time ON GO --Example provided by www.sqlworkshops.com --Execute the procedure with parameter SmallCity1 EXEC RegionalOrdersProc 'SmallCity1' GO After running the stored procedure, if we right-click on the Clustered Index Scan and click Properties we can see the Estimated Number of Rows is 24. If we right-click on Nested Loops and click Properties we do not see Prefetch, because it is disabled. This behavior was expected, because the number of rows containing the value 'SmallCity1' in the outer table is less than 25. Now, if I run the same procedure with parameter 'BigCity', will Prefetch be enabled? --Example provided by www.sqlworkshops.com --Execute the procedure with parameter BigCity --We are using cached plan EXEC RegionalOrdersProc 'BigCity' GO As we can see from the screenshot below, prefetch is not enabled and the query takes around 7 seconds to execute. This is because the query used the cached plan from 'SmallCity1' that had prefetch disabled. Please note that even if we have 999 rows for 'BigCity' the Estimated Number of Rows is still 24. Finally, let's clear the procedure cache to trigger a new optimization and execute the procedure again. DBCC freeproccache GO EXEC RegionalOrdersProc 'BigCity' GO This time, our procedure runs under a second, Prefetch is enabled and the Estimated Number of Rows is 999. The RegionalOrdersProc can be optimized by using the example below, where we are using an optimizer hint. I have also shown some other hints that could be used as well. 
--Example provided by www.sqlworkshops.com --You can fix the issue by using any of the following --hints --Create procedure that pulls orders based on City DROP PROC RegionalOrdersProc GO CREATE PROC RegionalOrdersProc @City CHAR(20) AS BEGIN DECLARE @OrderID INT, @OrderDetails CHAR(200) SELECT @OrderID = o.OrderID, @OrderDetails = o.OrderDetails FROM RegionalOrders ao INNER JOIN Orders o ON (o.OrderID = ao.OrderID) WHERE City = @City --Hinting the optimizer to use SmallCity2 for estimation OPTION (optimize FOR (@City = 'SmallCity2')) --Hinting the optimizer to estimate for the current parameters --option (recompile) --Hinting the optimizer not to use the histogram but rather --density for estimation (average of all 3 cities) --option (optimize for (@City UNKNOWN)) --option (optimize for UNKNOWN) END GO In conclusion, this tip was mainly aimed at illustrating how Prefetch can speed up query execution and how a different number of rows can trigger this.

    Read the article

  • Issues with ganglia and upstart in 13.04

    - by theist
    I'm having an issue with ganglia-monitor and upstart. Just after installing, it starts and it cannot be stopped. I tried to solve it following http://upstart.ubuntu.com/cookbook/#expect but I can't get upstart to track the actual pid of gmond. Assuming the configuration of the init config is wrong, I followed the directions in http://upstart.ubuntu.com/cookbook/#how-to-establish-fork-count in order to try to fix it. As it states, a count of 1 means the "expect fork" stanza and 2 means "expect daemon"... gmond gives a count of 8... And of course I can't get it to track the actual PID. I haven't found a bug report or anything like that. Is there something I missed? Upstart seems to be failing, and the worst thing is that upstart hangs on start/stop and I cannot even uninstall or reconfigure packages.

    Read the article

  • SQL SERVER – Quiz and Video – Introduction to Hierarchical Query using a Recursive CTE

    - by pinaldave
    This blog post is inspired by SQL Queries Joes 2 Pros: SQL Query Techniques For Microsoft SQL Server 2008 – SQL Exam Prep Series 70-433 – Volume 2. [Amazon] | [Flipkart] | [Kindle] | [IndiaPlaza] This is a follow-up to my earlier blog post on the same subject - SQL SERVER – Introduction to Hierarchical Query using a Recursive CTE – A Primer. In that article we discussed the basic terminology of the CTE. The article further covers the following important concepts of the common table expression: What is a Common Table Expression (CTE) Building a Recursive CTE Identify the Anchor and Recursive Query Add the Anchor and Recursive query to a CTE Add an expression to track hierarchical level Add a self-referencing INNER JOIN statement The above six are the most important concepts related to CTEs and SQL Server. There are many more things one has to learn, but without the beginner fundamentals one can't learn the advanced concepts. Let us have a small quiz and check how many of you get the fundamentals right. Quiz 1) You have an employee table with the following data. EmpID FirstName LastName MgrID 1 David Kennson 11 2 Eric Bender 11 3 Lisa Kendall 4 4 David Lonning 11 5 John Marshbank 4 6 James Newton 3 7 Sally Smith NULL You need to write a recursive CTE that shows the EmpID, FirstName, LastName, MgrID, and employee level. The CEO should be listed at Level 1. All people who work for the CEO will be listed at Level 2. All of the people who work for those people will be listed at Level 3. Which CTE code will achieve this result? WITH EmpList AS (SELECT Boss.EmpID, Boss.FName, Boss.LName, Boss.MgrID, 1 AS Lvl FROM Employee AS Boss WHERE Boss.MgrID IS NULL UNION ALL SELECT E.EmpID, E.FirstName, E.LastName, E.MgrID, EmpList.Lvl + 1 FROM Employee AS E INNER JOIN EmpList ON E.MgrID = EmpList.EmpID) SELECT * FROM EmpList WITH EmpList AS (SELECT EmpID, FirstName, LastName, MgrID, 1 as Lvl FROM Employee WHERE MgrID IS NULL UNION ALL SELECT EmpID, FirstName, LastName, MgrID, 2 as Lvl ) SELECT * FROM BossList WITH EmpList AS (SELECT EmpID, FirstName, LastName, MgrID, 1 as Lvl FROM Employee WHERE MgrID is NOT NULL UNION SELECT EmpID, FirstName, LastName, MgrID, BossList.Lvl + 1 FROM Employee INNER JOIN EmpList BossList ON Employee.MgrID = BossList.EmpID) SELECT * FROM EmpList 2) You have a table named Employee. The EmployeeID of each employee's manager is in the ManagerID column. You need to write a recursive query that produces a list of employees and their manager. The query must also include the employee's level in the hierarchy. You write the following code segment: WITH EmployeeList (EmployeeID, FullName, ManagerName, Level) AS ( --PICK ANSWER CODE HERE ) SELECT EmployeeID, FullName, '' AS [ManagerID], 1 AS [Level] FROM Employee WHERE ManagerID IS NULL UNION ALL SELECT emp.EmployeeID, emp.FullName mgr.FullName, 1 + 1 AS [Level] FROM Employee emp JOIN Employee mgr ON emp.ManagerID = mgr.EmployeeId SELECT EmployeeID, FullName, '' AS [ManagerID], 1 AS [Level] FROM Employee WHERE ManagerID IS NULL UNION ALL SELECT emp.EmployeeID, emp.FullName, mgr.FullName, mgr.Level + 1 FROM EmployeeList mgr JOIN Employee emp ON emp.ManagerID = mgr.EmployeeId Now make sure that you write down all the answers on a piece of paper. Watch the following video and read the earlier article over here. If you want to change an answer you still have a chance. Solution 1) 1 2) 2 Now let us check; compare your answers to the solutions above. I am very confident you will get them correct. 
Available at USA: Amazon India: Flipkart | IndiaPlaza Volume: 1, 2, 3, 4, 5 Please leave your feedback in the comment area for the quiz and video. Did you know all the answers of the quiz? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
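
    As a side illustration (not part of the quiz above), the level-by-level expansion a recursive CTE performs can be sketched in a few lines of Python: the anchor member seeds the rows with no manager at level 1, and the recursive member repeatedly joins back to the rows already produced. The data below is made up, not the quiz table.

    def hierarchy_levels(employees):
        """employees: iterable of (emp_id, mgr_id) pairs (hypothetical shape)."""
        levels = {e: 1 for e, m in employees if m is None}   # anchor: CEO at level 1
        changed = True
        while changed:                                        # recursive member
            changed = False
            for emp, mgr in employees:
                if emp not in levels and mgr in levels:
                    levels[emp] = levels[mgr] + 1
                    changed = True
        return levels

    # made-up data, not the quiz table
    print(hierarchy_levels([(7, None), (1, 7), (4, 7), (3, 4), (6, 3)]))
    # {7: 1, 1: 2, 4: 2, 3: 3, 6: 4}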

    Read the article

  • I love it when a plan comes together

    - by DavidWimbush
    I'm currently working on an application so that our Marketing department can produce most of their own mailing lists without my having to get involved. It was all going well until I got stuck on the bit where the actual SQL query is generated, but a rummage in Books Online revealed a very clean solution using some constructs that I had previously dismissed as pointless. Typically we want to email every customer who is in any of the following n groups. Experience shows that a group has the following definition: <people who have done A> [(AND <people who have done B>) | (OR <people who have done C>)] [APART FROM <people who have done D>] When doing these by hand I've been using INNER JOIN for the AND, UNION for the OR, and LEFT JOIN + WHERE D IS NULL for the APART FROM. This would produce two quite different queries: -- Old OR select A.PersonID from ( -- A select PersonID from ... union -- OR -- C select PersonID from ... ) AorC left join -- APART FROM ( select PersonID from ... ) D on D.PersonID = AorC.PersonID where D.PersonID is null -- Old AND select distinct main.PersonID from ( -- A select PersonID from ... ) A inner join -- AND ( -- B select PersonID from ... ) B on B.PersonID = A.PersonID left join -- APART FROM ( select PersonID from ... ) D on D.PersonID = A.PersonID where D.PersonID is null But when I tried to write the code that can generate the SQL for any combination of those (along with all the variables controlling what each SELECT did and what was in all the optional bits of each WHERE clause) my brain started to hurt. Then I remembered reading about the (then new to me) keywords INTERSECT and EXCEPT. At the time I couldn't see what they added but I thought I would have a play and see if they might help. They were perfect for this. Here's the new query structure: -- The way forward select PersonID from ( ( ( -- A select PersonID from ... ) union -- OR intersect -- AND ( -- B/C select PersonID from ... ) ) except ( -- D select PersonID from ... ) ) x I can easily swap between UNION and INTERSECT, and omit B, C, or D as necessary. Elegant, clean and readable - pick any 3! Sometimes it really pays to read the manual.
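
    For illustration only, the set algebra being mapped onto UNION, INTERSECT, and EXCEPT above corresponds directly to |, & and - on Python sets (the IDs below are made up):

    A = {1, 2, 3, 4}       # people who have done A (made-up IDs)
    B = {3, 4, 5}          # people who have done B
    C = {4, 6}             # people who have done C
    D = {4}                # people who have done D (to exclude)

    print((A | C) - D)     # "A OR C, APART FROM D"  -> {1, 2, 3, 6}
    print((A & B) - D)     # "A AND B, APART FROM D" -> {3}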

    Read the article

  • Flex: Adobe wants to donate the SDK to open source, creating confusion in its developer community

    Adobe wants to donate the Flex SDK to open source, and creates confusion in its developer community. The least one can say is that the situation is confusing. After announcing the end of Flash on mobile, Adobe has just made a second announcement that has its developers in a stir: the Flex SDK will be donated to the open-source community. Initially, the SDK will be entrusted to an organization, named the Open Spoon Foundation, partly overseen by Adobe. The foundation's name is a play on words between Spoon and Fork, "fork" being the usual term for...

    Read the article

  • Future of a ServiceStack based Solution in the Context of Licensing

    - by Harindaka
    I just want someone to clarify the following questions, as Demis Bellot announced a couple of weeks ago that ServiceStack would go commercial. Refer to the link below. https://plus.google.com/app/basic/stream/z12tfvoackvnx1xzd04cfrirpvybu1nje54 (Please note that when I say ServiceStack or SS I refer to all associated SS libraries such as ServiceStack.Text, etc.) (1) If I have a solution already developed using ServiceStack today, will I have to purchase a license once SS goes commercial, even if I don't upgrade the SS binaries to the commercial release version? (2) Will previous versions of SS (prior to commercial licensing) always be open source and use the same license as before? (3) If I fork SS today (prior to commercial licensing) on GitHub, would it be illegal to maintain that fork after SS goes commercial? (4) If the answer to question 2 is yes, would I still be able to fork a previous version after SS goes commercial without worrying about the commercial license (all the while maintaining and releasing the source to the public)?

    Read the article

  • Library and several small programs that use it: how should I structure my git repository?

    - by Dan
    I have some code that uses a library that I and others frequently modify (usually only by adding functions and methods). We each keep a local fork of the library for our own use. I also have a lot of small "driver" programs (~100 lines) that use the library and are used exclusively by me. Currently, I have both the driver programs and the library in the same repository, because I frequently make changes to both that are logically connected (adding a function to the library and then calling it). I'd like to merge my fork of the library with my co-workers' forks, but I don't want the driver programs to be part of the merged library. What's the best way to organize the git repositories for a large, shared library that needs to be merged frequently and a number of small programs that have changes that are connected to changes in the library?

    Read the article

  • JDK 7 In Action - Learn With Java Tutorials and Developer Guides

    - by sowmya
    At JavaOne 2012, Stuart Marks, Mike Duigou, and Joe Darcy gave a presentation about JDK 7 In Action. Learn more about using JDK 7 features with the help of Java Tutorials and JDK 7 Developer Guides. Links to relevant information are provided below. If you are considering moving to JDK 7 from a previous release, the JDK 7 Release Notes and JDK 7 Adoption Guide are great resources. Project Coin Features Improved Literals * Literals section in Primitive Datatypes topic. * Binary Literals * Underscores in Numeric Literals Strings In Switch * Strings In Switch Diamond * Type Inference for Generic Instance Creation * Type Inference and Instantiation of Generic Classes Multi-catch and Precise Throw * Catching Multiple Exception Types and Rethrowing Exceptions with Improved Type Checking * Catch Blocks Try-with-resources * The try-with-resources Statement NIO.2 File System API * File I/O for information on path, files, change notification, and more * Zip File System Provider * Zip File System Provider * Developing a Custom File System Provider Fork Join Framework * Fork/Join - Sowmya

    Read the article

  • SQL to select random mix of rows fairly [migrated]

    - by Matt Sieker
    Here's my problem: I have a set of tables in a database populated with data from a client that contains product information. In addition to the basic product information, there is also information about the manufacturer and categories for those products (a product can be in one or more categories). These categories are referred to as "Product Categories". There is also information about which stores these products are available at. These tables are updated once a week from a feed from the customer. Since some of the product categories are the same or closely related for our purposes, there is another level of categories called "General Categories"; a general category can have one or more product categories. For the scope of these tables, here are some rough numbers: Data Tables: Products: 475,000 Manufacturers: 1300 Stores: 150 General Categories: 245 Product Categories: 500 Mapping Tables: Product Category -> Product: 655,000 Stores -> Products: 50,000,000 Now, for the actual problem: As part of our software, we need to select n random products, given a store and a general category. However, we also need to ensure a good mix of manufacturers, as in some categories a single manufacturer dominates the results, and selecting rows at random causes the results to strongly favor that manufacturer. The solution currently in place, which works for most cases, involves selecting all of the rows that match the store and category criteria, partitioning them on manufacturer and including their row number from within their partition, then selecting from that where the row number for that manufacturer is less than n, and using ROWCOUNT to clamp the total rows returned to n. This query looks something like this: SET ROWCOUNT 6 select p.Id, GeneralCategory_Id, Product_Id, ISNULL(m.DisplayName, m.Name) AS Vendor, MSRP, MemberPrice, FamilyImageName from (select p.Id, gc.Id GeneralCategory_Id, p.Id Product_Id, ctp.Store_id, Manufacturer_id, ROW_NUMBER() OVER (PARTITION BY Manufacturer_id ORDER BY NEWID()) AS 'VendorOrder', MSRP, MemberPrice, FamilyImageName from GeneralCategory gc inner join GeneralCategoriesToProductCategories gctpc ON gc.Id=gctpc.GeneralCategory_Id inner join ProductCategoryToProduct pctp on gctpc.ProductCategory_Id = pctp.ProductCategory_Id inner join Product p on p.Id = pctp.Product_Id inner join StoreToProduct ctp on p.Id = ctp.Product_id where gc.Id = @GeneralCategory and ctp.Store_id=@StoreId and p.Active=1 and p.MemberPrice >0) p inner join Manufacturer m on m.Id = p.Manufacturer_id where VendorOrder <=6 order by NEWID() SET ROWCOUNT 0 (I've tried to somewhat format it to make it cleaner, but I don't think it really helps) Running this query with an execution plan shows that for the majority of these tables, it's doing a Clustered Index Seek. There are two operations that take up roughly 90% of the time: Index Seek (Nonclustered) on StoreToProduct: 17%. This table just contains the key of the store and the key of the product. It seems that NHibernate decided not to make a composite key when making this table, but I'm not concerned about this at this point, as compared to the other seek... Clustered Index Seek on Product: 69%. I really have no clue how I could make this one more performant. On categories without a lot of products, performance is acceptable (<50ms); however, larger categories can take a few hundred ms, with the largest category (which has about 170k products) taking 3s. 
It seems I have two ways to go from this point: (1) Somehow optimize the existing query and table indices to lower the query time. As almost every expensive operation is already a clustered index seek, I don't know what could be done there. The inner query could be tuned to not return all of the possible rows for that category, but I am unsure how to do this and maintain the requirements (random products, with a good mix of manufacturers). (2) Denormalize this data for the purpose of this query when doing the once-a-week import. However, I am unsure how to do this and maintain the requirements. Does anyone have any input on either of these items?
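
    For intuition only (the question itself is about tuning the T-SQL), the fairness requirement can be sketched in Python: cap each manufacturer's contribution, then sample from the capped pool. The names and the default cap are illustrative, mirroring the VendorOrder <= 6 filter in the query above.

    import random
    from collections import defaultdict

    def fair_sample(products, n, per_vendor_cap=6):
        """products: iterable of dicts with a 'manufacturer_id' key (hypothetical shape)."""
        by_vendor = defaultdict(list)
        for p in products:
            by_vendor[p["manufacturer_id"]].append(p)
        pool = []
        for items in by_vendor.values():
            random.shuffle(items)
            pool.extend(items[:per_vendor_cap])   # limit any one vendor's share
        random.shuffle(pool)
        return pool[:n]                           # clamp the total, like SET ROWCOUNT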

    Read the article

  • Document Foundation: no LibreOffice OnLine this year, but an official roadmap; the Android port is progressing

    The Document Foundation: no LibreOffice OnLine this year, but an official roadmap; the Android port is progressing. Edit, 4 PM: The Document Foundation has just corrected some confusion in its communication around the Cloud and mobile versions of its OpenOffice.org fork, and is announcing a roadmap. Yesterday the Document Foundation laid the final stone of the last release in the 3.4 branch of its open-source office suite. As a result, its energy will now be entirely focused on the 3.5 branch of LibreOffice, and on the new versions of its OpenOffice.org fork. The first...

    Read the article

  • Github Feed affecting my WordPress installation? [on hold]

    - by saul
    Any idea how this fork is affecting my site? I went to verify my website log stats and realized this may be the cause of a strange redirect constantly happening on my WordPress installation. Here's a line I found in my log: 54.81.91.95 - - [07/May/2014:22:52:08 -0400] "GET /category/selfie/feed/ HTTP/1.1" 200 1826 "-" "feedzirra http://github.com/pauldix/feedzirra/tree/master" And this is the GitHub fork (or whatever these are called): https://github.com/feedjira/feedjira/tree/master Basically, I think every time I update my categories (selfie in this case), I get redirected to install.php, probably by triggering some GET request on that feed. To the best of my knowledge, this feed parses all URLs with this structure, blocking them, kind of like a DDoS attack? Any ideas how to go about it?

    Read the article

  • Forked a project, where do my version numbers start?

    - by TheLQ
    I have forked a project and have changed lots of it. This fork isn't just a small feature change here and a buried bug fix there; it's a pretty substantial change. Only most of the core code is shared. I forked this project at v2.5.0. For a while I've been versioning my fork at v3.0. However, I'm not sure if this is the right way, mainly because when that project hits v3.0, things get confusing. But I don't want to start over at v1.0 or v0.1 because that implies infancy, instability, and a lack of refinement in a project. This isn't true, as most of the core code is very refined and stable. I'm really lost on what to do, so I ask here: What's the standard way to deal with this kind of situation? Do most forks start over again, bump up version numbers, or do something else that I'm not aware of?

    Read the article

  • git commit -m "CodePlex now supports Git!"

    Finally, yes, CodePlex now supports Git! Git has been one of the top rated requests from the CodePlex community for some time: Admittedly, when we launched CodePlex, we never expected that at some point we would be running a source control system originally invented by Linus Torvalds to use for the Linux kernel. Though I would also say, nobody would have thought the open source ecosystem would be as important to Microsoft as it has become now. Giving CodePlex users what they ask for and supporting their open source efforts has always been important to us, and we have a long list of improvements planned, so stay tuned as we have more up our sleeves! Why Git? So why Git? CodePlex already has Mercurial for distributed version control and TFS (which also supports subversion clients) for centralized version control. The short answer is that the CodePlex community voted, loud and clear, that Git support was critical. Additionally, we just like it, we use Git on our team every day and making the DVCS workflows more available to the CodePlex community is just the right thing to do. Forks and Pull Requests One of the capabilities that distributed version control systems, such as Mercurial and Git, enable is the Fork and Pull Request workflow.  Just like with Mercurial, projects configured to use Git enable Forking the source and submitting contributions back via Pull Requests. The Fork/Pull Request workflow is a key accelerator to many open source projects and you will see improvements in our support coming later this year. More Choice With the addition of Git, now CodePlex has three options when it comes to Open Source project hosting. Projects can now select between TFS, Mercurial, and Git. Each developer has their own preferences, and for some, centralized version control makes more sense to them. For others, DVCS is the only way to go. We’re equally committed to supporting both these technologies for our users. You can get started today by creating a new project or contribute to an existing project by creating a fork. For help on getting started with Git on CodePlex, see our help documentation here. If you would like to switch your project to use Git, please contact us at CodePlex Support with your project information, and we will be happy to help you out. We're Listening CodePlex is your community, and we want to deliver the experiences you need to have a successful open source project. We want your ideas and feedback to make CodePlex a great development community.  The issue tracker on CodePlex is publicly available. Add suggestions or vote up existing suggestions. And you can always find us on Twitter, I’m @mgroves84; follow us to keep up to date with our latest releases: @codeplex

    Read the article
