Search Results

Search found 8547 results on 342 pages for 'hash join'.

Page 82 of 342

  • Is there a way to give a subquery an alias in Oracle 11g SQL?

    - by Matt Pascoe
    Is there a way to give a subquery in Oracle 11g an alias like: select * from (select client_ref_id, request from some_table where message_type = 1) abc, (select client_ref_id, response from some_table where message_type = 2) defg where abc.client_ref_id = defg.client_ref_id; Otherwise, is there a way to join the two subqueries based on the client_ref_id? I realize there is a self join, but on the database I am running on a self join can take up to 5 min to complete (there is some extra logic in the actual query I am running, but I have determined the self join is what is causing the issue). The individual subqueries only take a few seconds to complete by themselves. The self join query looks something like: select st.request, st1.request from some_table st, some_table st1 where st.client_ref_id = st1.client_ref_id;
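    A sketch of the same idea with explicit ANSI join syntax and consistent aliases (the alias names req/resp are just illustrative; the tables and columns are the ones from the question):

      SELECT req.client_ref_id, req.request, resp.response
      FROM  (SELECT client_ref_id, request  FROM some_table WHERE message_type = 1) req
      JOIN  (SELECT client_ref_id, response FROM some_table WHERE message_type = 2) resp
        ON  req.client_ref_id = resp.client_ref_id;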

    Read the article

  • Getting counts of 0 from a query with a double group by

    - by Maltiriel
    I'm trying to write a query that gets the counts for a table (call it item) categorized by two different things, call them type and code. What I'm hoping for as output is the following:

      Type  Code  Count
      1     A     3
      1     B     0
      1     C     10
      2     A     0
      2     B     13
      2     C     2

    And so forth. Both type and code are found in lookup tables, and each item can have just one type but more than one code, so there's also a pivot (aka junction or join) table for the codes. I have a query that can get this result:

      Type  Code  Count
      1     A     3
      1     C     10
      2     B     13
      2     C     2

    and it looks like (with join conditions omitted):

      SELECT typelookup.name, codelookup.name, COUNT(item.id)
      FROM typelookup
      LEFT OUTER JOIN item
      JOIN itemcodepivot
      RIGHT OUTER JOIN codelookup
      GROUP BY typelookup.name, codelookup.name

    Is there any way to alter this query to get the results I'm looking for? This is in MySQL, if that matters. I'm not actually sure this is possible all in one query, but if it is I'd really like to know how. Thanks for any ideas.
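    One way to get the missing zero rows is to cross join the two lookup tables so every (type, code) pair exists, then LEFT JOIN the data onto that grid and count the pivot matches. A minimal sketch; the key column names (id, typeid, itemid, codeid) are guesses, not from the question:

      SELECT t.name           AS type,
             c.name           AS code,
             COUNT(p.itemid)  AS cnt   -- counts only rows where the pivot matched, so empty pairs give 0
      FROM typelookup t
      CROSS JOIN codelookup c          -- every (type, code) combination
      LEFT JOIN item i          ON i.typeid = t.id
      LEFT JOIN itemcodepivot p ON p.itemid = i.id AND p.codeid = c.id
      GROUP BY t.name, c.name;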

    Read the article

  • T-SQL Getting duplicate rows returned

    - by cBlaine
    The following code section is returning multiple rows for a few records. SELECT a.ClientID,ltrim(rtrim(c.FirstName)) + ' ' + case when c.MiddleName <> '' then ltrim(rtrim(c.MiddleName)) + '. ' else '' end + ltrim(rtrim(c.LastName)) as ClientName, a.MISCode, b.Address, b.City, dbo.ClientGetEnrolledPrograms(CONVERT(int,a.ClientID)) as Abbreviation FROM ClientDetail a JOIN Address b on(a.PersonID = b.PersonID) JOIN Person c on(a.PersonID = c.PersonID) LEFT JOIN ProgramEnrollments d on(d.ClientID = a.ClientID and d.Status = 'Enrolled' and d.HistoricalPKID is null) LEFT JOIN Program e on(d.ProgramID = e.ProgramID and e.HistoricalPKID is null) WHERE a.MichiganWorksData=1 I've isolated the issue to the ProgramEnrollments table. This table holds one-to-many relationships where each ClientID can be enrolled in many programs. So for each program a client is enrolled in, there is a record in the table. The final result set is therefore returning a row for each row in the ProgramEnrollments table based on these joins. I presume my join is the issue but I don't see the problem. Thoughts/Suggestions? Thanks, Chuck
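    A sketch of one possible fix, based only on the query above: the two LEFT JOINed tables (ProgramEnrollments, Program) feed nothing in the SELECT list — the enrolled programs already come from the scalar function — so dropping them removes the row fan-out. If they were meant to filter, an EXISTS would do that without multiplying rows.

      SELECT a.ClientID,
             LTRIM(RTRIM(c.FirstName)) + ' ' +
             CASE WHEN c.MiddleName <> '' THEN LTRIM(RTRIM(c.MiddleName)) + '. ' ELSE '' END +
             LTRIM(RTRIM(c.LastName)) AS ClientName,
             a.MISCode, b.Address, b.City,
             dbo.ClientGetEnrolledPrograms(CONVERT(int, a.ClientID)) AS Abbreviation
      FROM ClientDetail a
      JOIN Address b ON a.PersonID = b.PersonID
      JOIN Person  c ON a.PersonID = c.PersonID
      WHERE a.MichiganWorksData = 1;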

    Read the article

  • Random Record from Recordset

    - by Tony Hanks
    I have the following Query: SELECT a.*, ps4_media.filename, ps4_galleries.name as galleryname, ps4_media_iptc.description, ps4_media_iptc.title, ps4_media_iptc.headline, ps4_media.date_added, ps4_galleries.created, ps4_folders.name as foldername, ps4_galleries.gallery_count FROM ps4_media_galleries a INNER JOIN (SELECT ps4_media_galleries.gallery_id, min(ps4_media_galleries.gmedia_id) AS minID FROM ps4_media_galleries GROUP BY ps4_media_galleries.gallery_id) b ON a.gallery_id = b.gallery_id AND a.gmedia_id = b.minID INNER JOIN ps4_media ON ps4_media.media_id = a.gmedia_id INNER JOIN ps4_folders ON ps4_folders.folder_id = ps4_media.folder_id INNER JOIN ps4_galleries ON ps4_galleries.gallery_id = a.gallery_id INNER JOIN ps4_media_iptc ON ps4_media_iptc.media_id = a.gmedia_id ORDER BY ps4_galleries.created DESC How do I get ps4_media.filename to be random? Everything else is fine; I just want the thumbnail (which is ps4_media.filename) to be any of the records in the found set.
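    One MySQL approach, sketched here on the assumption that a correlated subquery per gallery is acceptable performance-wise: instead of taking min(gmedia_id), pick the thumbnail with ORDER BY RAND() LIMIT 1.

      SELECT g.gallery_id,
             g.name AS galleryname,
             (SELECT m.filename
                FROM ps4_media_galleries mg
                JOIN ps4_media m ON m.media_id = mg.gmedia_id
               WHERE mg.gallery_id = g.gallery_id
               ORDER BY RAND()
               LIMIT 1) AS filename         -- a random member of that gallery
        FROM ps4_galleries g
       ORDER BY g.created DESC;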

    Read the article

  • need help in aggregate select

    - by eugeneK
    Hi, I have a problem with selecting some values from my DB. The DB is in its design stages so I can redesign it a bit if needed. You can see the diagram on this image. Basically what I want to select is: select c.campaignID, ct.campaignTypeName, c.campaignName, c.campaignDailyBudget, c.campaignTotalBudget, c.campaignCPC, c.date, cs.campaignStatusName ***impressions, ***clicks, ***cast(campaignTotalBudget-(clicks*campaignCPC) as decimal(18,1)) as remainingFunds from Campaigns as c left join CampaignTypes as ct on c.campaignTypeID=ct.campaignTypeID left join CampaignStatuses as cs on c.campaignStatusID=cs.campaignStatusID left join CampaignVariants as cv on c.campaignID=cv.campaignID left join CampaignVariants2Visitors as c2v on cv.campaignVariantID=c2v.campaignVariantID left join Visitors as v on c2v.visitorID=v.visitorID ..... order by c.campaignID desc The problem is that the Visitors table has a column named isClick, and I don't know how to separate impressions (isClick=false) from clicks (isClick=true), so I can show a nice form with all the stuff about campaigns and visitors... I don't think splitting Visitors into two tables like Impressions and Clicks is a good idea, because again I would end up with two more tables in place of Visitors. Thanks
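    Conditional aggregation can derive both counts from the single Visitors table, so no split is needed. A minimal sketch using the tables and columns from the query above, assuming isClick is stored as 0/1 (the other lookup joins and columns can be added back as in the original):

      SELECT c.campaignID,
             SUM(CASE WHEN v.isClick = 0 THEN 1 ELSE 0 END) AS impressions,
             SUM(CASE WHEN v.isClick = 1 THEN 1 ELSE 0 END) AS clicks,
             CAST(c.campaignTotalBudget -
                  SUM(CASE WHEN v.isClick = 1 THEN 1 ELSE 0 END) * c.campaignCPC
                  AS decimal(18,1)) AS remainingFunds
      FROM Campaigns c
      LEFT JOIN CampaignVariants cv           ON c.campaignID = cv.campaignID
      LEFT JOIN CampaignVariants2Visitors c2v ON cv.campaignVariantID = c2v.campaignVariantID
      LEFT JOIN Visitors v                    ON c2v.visitorID = v.visitorID
      GROUP BY c.campaignID, c.campaignTotalBudget, c.campaignCPC
      ORDER BY c.campaignID DESC;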

    Read the article

  • python -> combinations of numbers and letters

    - by tekknolagi
    #!/usr/bin/python
    import random

    lower_a = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z']
    upper_a = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z']
    num = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']

    all = []
    all = " ".join("".join(lower_a) + "".join(upper_a) + "".join(num))
    all = all.split()

    x = 1
    c = 1
    while x < 10:
        y = []
        for i in range(c):
            a = random.choice(all)
            y.append(a)
        print "".join(y)
        x += 1
        c += 1

    What I have now outputs something like the following:

      5
      hE
      HAy
      1kgy
      Pt6JM
      2pFuCb
      Jv5osaX
      5q8PwWAO
      SvHWRKfI5

    How can I make it systematically go through every combination of letters (upper and lowercase) for a given length, then add 1 to that length and repeat the process?
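    For a systematic sweep, itertools.product(chars, repeat=length) yields every string of a given length in order, so wrapping it in a loop over increasing lengths replaces the random.choice approach. A minimal sketch using the same character set (the max_length cap is only there to keep the output finite; 62**n grows very fast):

      import itertools
      import string

      chars = string.ascii_lowercase + string.ascii_uppercase + string.digits

      max_length = 4  # illustrative cap; raise as needed
      for length in range(1, max_length + 1):
          for combo in itertools.product(chars, repeat=length):
              print("".join(combo))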

    Read the article

  • ERROR: there is no parameter $1 when creating view

    - by idlemoments
    When we try to create a view within a function we get ERROR: there is no parameter $1. This is the sample code. Begin CREATE VIEW artikelnr AS SELECT datum, 'uitgifte' as "type", CASE WHEN 'test'='test' THEN 0 END as "aantal ontvangen", aantal as "aantal uitgegeven" FROM uitgifteregel JOIN artikel ON artikel.artikelnr = new.artikelnr JOIN uitgifte ON uitgifte.uitgiftenr = uitgifteregel.uitgiftenr UNION SELECT datum, 'ontvangst' as "type", aantal as "aantal ontvangen" , CASE WHEN 'test'='test' THEN 0 END as "aantal uitgegeven" FROM ontvangstregel JOIN artikel ON artikel.artikelnr = new.artikelnr JOIN ontvangst ON ontvangst.ontvangstnr = ontvangstregel.ontvangstnr; Return new; end; When we replace new.artikelnr on line 7 with value 1 it works like it should, but the function needs to work with different artikelnr's. example line 7: JOIN artikel ON artikel.artikelnr = new.artikelnr Please point us in the right direction.
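    A PostgreSQL view cannot reference NEW or a function parameter; its definition is fixed at creation time. One sketch of a workaround, assuming uitgifteregel and ontvangstregel carry an artikelnr column themselves: create the view once without the per-row filter and apply the filter wherever the view is queried (for example inside the trigger function).

      CREATE VIEW artikel_mutaties AS
      SELECT ur.artikelnr, u.datum, 'uitgifte' AS "type",
             0 AS "aantal ontvangen", ur.aantal AS "aantal uitgegeven"
      FROM uitgifteregel ur
      JOIN uitgifte u ON u.uitgiftenr = ur.uitgiftenr
      UNION
      SELECT orr.artikelnr, o.datum, 'ontvangst',
             orr.aantal, 0
      FROM ontvangstregel orr
      JOIN ontvangst o ON o.ontvangstnr = orr.ontvangstnr;

      -- later, e.g. inside the trigger function:
      -- SELECT * FROM artikel_mutaties WHERE artikelnr = NEW.artikelnr;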

    Read the article

  • More SQL Smells

    - by Nick Harrison
    Let's continue exploring some of the SQL Smells from Phil's list that he has been putting together.

    Datatype mis-matches in predicates that rely on implicit conversion. (Plamen Ratchev)

    This is a great example poking holes in the whole theory of "If it works it's not broken". Queries like this will generally work and give the correct response. In fact, without careful analysis, you may be completely oblivious that there is even a problem. This subtle little problem will needlessly complicate queries and slow them down regardless of the indexes applied. Consider this example:

      CREATE TABLE [dbo].[Page](
          [PageId] [int] IDENTITY(1,1) NOT NULL,
          [Title] [varchar](75) NOT NULL,
          [Sequence] [int] NOT NULL,
          [ThemeId] [int] NOT NULL,
          [CustomCss] [text] NOT NULL,
          [CustomScript] [text] NOT NULL,
          [PageGroupId] [int] NOT NULL);

      CREATE PROCEDURE PageSelectBySequence ( @sequenceMin smallint , @sequenceMax smallint ) AS
      BEGIN
          SELECT [PageId] , [Title] , [Sequence] , [ThemeId] , [CustomCss] , [CustomScript] , [PageGroupId]
          FROM [CMS].[dbo].[Page]
          WHERE Sequence BETWEEN @sequenceMin AND @sequenceMax
      END

    Note that the Sequence column is defined as int while the sequence parameter is defined as a smallint. The problem is that the database may have to do a lot of type conversions to evaluate the query. In some cases, this may even negate the indexes that you have in place.

    Using correlated subqueries instead of a join (Dave_Levy / Plamen Ratchev)

    There are two main problems here. The first is a little subjective: since this is a non-standard way of expressing the query, it is harder to understand. The other problem is much more objective and potentially problematic. You are taking much of the control away from the optimizer. Written properly, such a query may well outperform a corresponding query written with traditional joins. More likely than not, performance will degrade. Whenever you assume that you know better than the optimizer, you will most likely be wrong. This is the fundamental problem with any hint. Consider a query like this:

      SELECT Page.Title , Page.Sequence , Page.ThemeId , Page.CustomCss , Page.CustomScript ,
             PageEffectParams.Name , PageEffectParams.Value ,
             ( SELECT EffectName FROM dbo.Effect WHERE EffectId = dbo.PageEffects.EffectId ) AS EffectName
      FROM Page
      INNER JOIN PageEffect ON Page.PageId = PageEffects.PageId
      INNER JOIN PageEffectParam ON PageEffects.PageEffectId = PageEffectParams.PageEffectId

    This can and should be written as:

      SELECT Page.Title , Page.Sequence , Page.ThemeId , Page.CustomCss , Page.CustomScript ,
             PageEffectParams.Name , PageEffectParams.Value , EffectName
      FROM Page
      INNER JOIN PageEffect ON Page.PageId = PageEffects.PageId
      INNER JOIN PageEffectParam ON PageEffects.PageEffectId = PageEffectParams.PageEffectId
      INNER JOIN dbo.Effect ON dbo.Effects.EffectId = dbo.PageEffects.EffectId

    The correlated query may just as easily show up in the where clause. It's not a good idea in the select clause or the where clause.

    Few or no comments.

    This one is a bit more complicated and controversial. All comments are not created equal. Some comments are helpful and need to be included. Other comments are not necessary and may indicate a problem. I tend to follow the rule of thumb that comments that explain why are good. Comments that explain how are bad. Many people may be shocked to hear the idea of a bad comment, but hear me out. If a comment is needed to explain what is going on or how it works, the logic is too complex and needs to be simplified. Comments that explain why are good. Comments that explain why the sql is needed are good. Comments that explain where the sql is used are good. Comments that explain how tables are related should not be needed if the sql is well written. If they are needed, you need to consider reworking the sql or simplifying your data model.

    Use of functions in a WHERE clause. (Anil Das)

    Calling a function in the where clause will often negate the indexing strategy. The function will be called for every record considered. This will often force a full table scan on the tables affected. Calling a function will not guarantee that there is a full table scan, but there is a good chance that it will. If you find that you often need to write queries using a particular function, you may need to add a column to the table that has the function already applied.
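    To make that last point concrete, here is a small illustration (the Orders table and OrderDate column are hypothetical, not from the article): wrapping the column in a function hides it from any index, while an equivalent range predicate stays sargable.

      -- Non-sargable: the function runs for every row, so an index on OrderDate goes unused.
      SELECT OrderId FROM Orders WHERE YEAR(OrderDate) = 2013;

      -- Sargable rewrite: the same filter expressed as a range on the raw column.
      SELECT OrderId FROM Orders WHERE OrderDate >= '20130101' AND OrderDate < '20140101';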

    Read the article

  • Scripting out Contained Database Users

    - by Argenis
      Today’s blog post comes from a Twitter thread on which @SQLSoldier, @sqlstudent144 and @SQLTaiob were discussing the internals of contained database users. Unless you have been living under a rock, you’ve heard about the concept of contained users within a SQL Server database (hit the link if you have not). In this article I’d like to show you that you can, indeed, script out contained database users and recreate them on another database, as either contained users or as good old fashioned logins/server principals as well. Why would this be useful? Well, because you would not need to know the password for the user in order to recreate it on another instance. I know there is a limited number of scenarios where this would be necessary, but nonetheless I figured I’d throw this blog post out to show how it can be done. A more obscure use case: with the password hash (which I’m about to show you how to obtain) you could also crack the password using a utility like hashcat, as highlighted on this SQLServerCentral article.

    The Investigation

    SQL Server uses System Base Tables to save the password hashes of logins and contained database users. For logins it uses sys.sysxlgns, whereas for contained database users it leverages sys.sysowners. I’ll show you what I do to figure this stuff out: I create a login/contained user, and then I immediately browse the transaction log with, for example, fn_dblog. It’s pretty obvious that the only two base tables touched by the operation are sys.sysxlgns and sys.sysprivs – the latter is used to track permissions. If I connect to the DAC on my instance, I can query for the password hash of this login I’ve just created. A few interesting things about this hash: this was taken on my laptop, and I happen to be running SQL Server 2014 RTM CU2, which is the latest public build of SQL Server 2014 as of the time of writing. In 2008 R2 and prior versions (back to 2000), the password hashes would start with 0x0100. This changed because, starting with SQL Server 2012, password hashes are kept using a SHA-512 algorithm, as opposed to SHA-1 (used since 2000) or Snefru (used in 6.5 and 7.0). SHA-1 is nowadays deemed unsafe and is very easy to crack. For regular SQL logins, this information is exposed through the sys.sql_logins catalog view, so there is really no need to connect to the DAC to grab an SID/password hash pair. For contained database users, there is (currently) no method of obtaining SID or password hashes without connecting to the DAC. If we create a contained database user, this is what we get from the transaction log: Note that the System Base Table used in this case is sys.sysowners. sys.sysprivs is used as well, and again this is to track permissions. To query sys.sysowners, you would have to connect to the DAC, as I mentioned previously. And this is what you would get: There are other ways to figure out what SQL Server uses under the hood to store contained database user password hashes, like looking at the execution plan for a query to sys.dm_db_uncontained_entities (Thanks, Robert Davis!)

    SIDs, Logins, Contained Users, and Why You Care…Or Not.

    One of the reasons behind the existence of Contained Users was the concept of portability of databases: it is really painful to maintain Server Principals (Logins) synced across most shared-nothing SQL Server HA/DR technologies (Mirroring, Availability Groups, and Log Shipping). Oftentimes you would need the Security Identifier (SID) of these logins to match across instances, and that meant that you had to fetch whatever SID was assigned to the login on the principal instance so you could recreate it on a secondary. With contained users you normally wouldn’t care about SIDs, as the users are always available (and synced, as long as synchronization takes place) across instances. Now you might be presented with a particular requirement that SIDs be synced between logins on certain instances and contained database users on other databases. How would you go about creating a contained database user with a specific SID? The answer is that you can’t do it directly, but there’s a little trick that would allow you to do it. Create a login with a specified SID and password hash, create a user for that server principal on a partially contained database, then migrate that user to contained using the system stored procedure sp_migrate_user_to_contained, then drop the login.

      CREATE LOGIN <login_name> WITH PASSWORD = <password_hash> HASHED, SID = <sid>;
      GO
      USE <partially_contained_db>;
      GO
      CREATE USER <user_name> FROM LOGIN <login_name>;
      GO
      EXEC sp_migrate_user_to_contained @username = <user_name>, @rename = N'keep_name', @disablelogin = N'disable_login';
      GO
      DROP LOGIN <login_name>;
      GO

    Here’s how this skeleton looks in action: And now I have a contained user with a specified SID and password hash. In my example above, I renamed the user after migrating it to contained so that it is, hopefully, easier to understand. Enjoy!
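    As an aside, for a regular SQL login the SID/password hash pair can be pulled straight from sys.sql_logins (as the article notes) and replayed with the HASHED syntax from the skeleton above. A minimal sketch; the login name is just a placeholder:

      SELECT name, sid, password_hash
      FROM sys.sql_logins
      WHERE name = N'some_login';

      -- then, on the other instance:
      -- CREATE LOGIN some_login WITH PASSWORD = 0x0200... HASHED, SID = 0x...;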

    Read the article

  • Best way to store a large amount of game objects and update the ones onscreen

    - by user3002473
    Good afternoon guys! I'm a young beginner game developer working on my first large scale game project and I've run into a situation where I'm not quite sure what the best solution may be (if there is a lone solution). The question may be vague (if anyone can think of a better title after having read the question, please edit it) or broad but I'm not quite sure what to do and I thought it would help just to discuss the problem with people more educated in the field.

    Before we get started, here are some of the questions I've looked at for help in the past:
      Best way to keep track of game objects
      Elegant way to simulate large amounts of entities within a game world
      What is the most efficient container to store dynamic game objects in?
    I've also read articles about different data structures commonly used in games to store game objects such as this one about slot maps, but none of them are really what I'm looking for. Also, if it helps at all I'm using Python 3 to design the game. It has to be Python 3, if I could I would use C++ or Unityscript or something else, but I'm restricted to having to use Python 3.

    My game will be a form of side scroller shooter game. In said game the player will traverse large rooms with large amounts of enemies and other game objects to update (think some of the larger areas in Cave Story or Iji). The player obviously can't see the entire room all at once, so there is a viewport that follows the player around and renders only a selection of the room and the game objects that it contains. This is not a foreign concept.

    The part that's getting me confused has to do with how certain game objects are updated. Some of them are to be updated constantly, regardless of whether or not they can be seen. Other objects however are only to be updated when they are onscreen (for example, an enemy would only be updated to react to the player when it is onscreen or when it is in a certain range of the screen). Another problem is that game objects have to be easily referable by other game objects; something that happens in the player's update() method may affect another object in the world. Collision detection in games is always a serious problem. I need a way of containing the game objects such that it minimizes the number of cases when testing for collisions against one another. The final problem is that of creating and destroying game objects. I think this problem is pretty self explanatory.

    To store the game objects then I've considered a number of different methods. The original method I had was to simply store all the objects in a hash table by an id. This method was simple, and decently fast as it allows all the objects to be looked up in O(1) complexity, and also allows them to be deleted fairly easily. Hash collisions would not be a major problem; I wasn't originally planning on using computer generated ids to store the game objects. I was going to rely on them all using ids given to them by the game designer (such names would be strings like 'Player' or 'EnemyWeapon4'), and even if I did use computer generated ids, if I used a decent hashing algorithm then the chances of collisions would be around 1 in 4 billion. The problem with using a hash table however is that it is inefficient in checking to see what objects are in range of the viewport. Considering the fact that certain game objects move (as well as the viewport itself), the only solution I could think of in order to only update objects that are in the viewport would be to iterate through every object in the hash table and check if it is in the viewport or not, updating only the ones that are in the valid area. This would be incredibly slow in scenarios where the amount of game objects exceeds 500, or even 200.

    The second solution was to store everything in a 2-d list. The world is partitioned up into cells (a tilemap essentially), where each cell or tile is the same size and is square. Each cell would contain a list of the game objects that are currently occupying it (each game object would be inserted into a cell depending on the center of the object's collision mask). A 2-d list would allow me to take the top-left and bottom-right corners of the viewport and easily grab a rectangular area of the grid containing only the cells containing entities that are in valid range to be updated. This method also solves the problem of collision detection; when I take an entity I can find the cell that it is currently in, then check only against entities in its cell and the 8 cells around it. One problem with this system however is that it prohibits easy lookup of game objects. One solution I had would be to simultaneously keep a hash table that would contain all the positions of the objects in the 2-d list indexed by the id of said object. The major problem with a 2-d list is that it would need to be rebuilt every single game frame (along with the hash table of object positions), which may be a serious detriment to game speed.

    Both systems have ups and downs and seem to solve some of each other's problems, however using them both together doesn't seem like the best solution either. If anyone has any thoughts, ideas, suggestions, comments, opinions or solutions on new data structures or better implementations of the existing data structures I have in mind, please post, any and all criticism and help is welcome. Thanks in advance!

    EDIT: Please don't close the question because it has a bad title, I'm just bad with names!
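    A minimal Python 3 sketch of the hybrid described above, assuming each object exposes an id and x/y attributes: the dict gives O(1) lookup by id, while a dict-of-cells answers "what is near the viewport" without touching every object, and it is updated in place as objects move rather than rebuilt every frame.

      from collections import defaultdict

      CELL_SIZE = 64  # pixels per square cell; tune to typical object size

      class World:
          def __init__(self):
              self.objects = {}                 # id -> object
              self.cells = defaultdict(set)     # (cx, cy) -> set of ids

          def _cell(self, x, y):
              return (int(x) // CELL_SIZE, int(y) // CELL_SIZE)

          def add(self, obj):
              self.objects[obj.id] = obj
              self.cells[self._cell(obj.x, obj.y)].add(obj.id)

          def remove(self, obj_id):
              obj = self.objects.pop(obj_id)
              self.cells[self._cell(obj.x, obj.y)].discard(obj_id)

          def move(self, obj, new_x, new_y):
              old, new = self._cell(obj.x, obj.y), self._cell(new_x, new_y)
              if old != new:                    # only touch the grid on a cell change
                  self.cells[old].discard(obj.id)
                  self.cells[new].add(obj.id)
              obj.x, obj.y = new_x, new_y

          def ids_in_rect(self, left, top, right, bottom):
              """Ids of objects whose cells overlap a rectangle such as the viewport."""
              (cx0, cy0), (cx1, cy1) = self._cell(left, top), self._cell(right, bottom)
              for cx in range(cx0, cx1 + 1):
                  for cy in range(cy0, cy1 + 1):
                      yield from self.cells[(cx, cy)]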

    Read the article

  • 101 Ways to Participate...and make the future Java

    - by heathervc
     In case you missed it earlier today, and as promised in BOF6283, here are the 101 Ways to Improve (and Make the Future) Java...thanks to Bruno Souza of SouJava and Martijn Verburg of the London Java Community for their contributions! Join or create a JUG Come to the meetings Help promoting your JUG: twitter, facebook, etc Find someone that can give a talk Get your company to sponsor (a meeting, an event) Organize an activity (meetings, hackathons, dojos, etc) Answer questions on a mailing list (or simply join!) Volunteer for a small, one time tasks (creating a web page, helping with an activity) Come early to an event, and help to carry the piano Moderate a list or add things to the wiki Participate in the organization meetings or mailing lists Take pictures of an event or meeting and publish them online Write a blog about an event or meeting, to help promote the group Help record and post a session online Present your JavaOne experience when you get back Repeat the best talk you saw at JavaOne at a JUG meeting Send this list of ideas to other Java developers in your area so they can help out too! Present a step-by-step tutorial Present GreenFoot and Alice to school students Present BlueJ and Alice to university students Teach those tools to teachers and professors Write a step-by-step tutorial on your blog or to a magazine Create a page that lists resources Give a talk about your favorite Java feature or technology Learn a new Java API and present to your co-workers Then, present in a JUG meeting, and then, present it in an event in your area, and submit it to JavaOne! Create a study group to get certified or to learn some new Java technology Teach a non-Java developer how to download the basic tools and where to find more information Download and use an open source project Improve the documentation Write an article or a blog post about the project Write an FAQ Join and participate on the mailing list Describe a bug in detail and submit a bug report Fix a bug and submit it to the project Give a talk about it at a JUG meeting Teach your co-workers how to use the project Sign up to Adopt a JSR Test regular builds of the Reference Implementation (RI) Report bugs in the RI Submit Feature Requests to the spec Triage issues on the issue tracker Run a hack day to discuss the API Moderate mailing lists and forums Create an FAQ or Wiki Evangelize a specification on Twitter, G+, Hacker News, etc Give a lightning talk Help build the RI Help build the Technical Compatibility Kit (TCK) Create a Podcast Learn Latin - e.g. 
    legal language, translate to English Sign up to Adopt OpenJDK Run a Bugathon Fix javac compiler warnings Build virtual images Add tests to Java Submit Javadoc patches Give a webinar Teach someone to build OpenJDK Hold a brown bag session at work Fix the oldest known bug Overhaul Javadoc to use HTML Load the OpenJDK into different IDEs Run a build farm node Test your code on a nightly build Learn how to read Java byte code Visit JCP.org Follow jcp_org on Twitter Friend JCP on Facebook Read JCP Blog Register for JCP.org site Create a JSR Watch List Review JSRs in progress Comment on JSRs in progress, write and track bug reports, use cases, etc Review JSRs in Maintenance Comment on JSRs in Maintenance Implement Final JSRs Review the Transparency of JSRs in progress and provide feedback to the PMO and Spec Lead/community Become a JCP Member or associate with a current JCP member Nominate to serve on an Expert Group (EG) Serve on an EG Submit a JSR proposal and become Spec Lead Take a Spec Lead role in an Inactive or Dormant JSR Nominate for an Executive Committee (EC) seat Vote in the EC elections Vote in EC Special Elections Review EC Meeting Summaries Attend Spec Lead calls Write blogs, articles on your experiences Join the EC project on java.net Join JCP.Next on java.net/JSR 358 Participate on the JCP forums and join JSR projects on java.net Suggest agenda items for open EC meetings Attend public EC teleconference (2x per year) Attend open EC meetings at JavaOne Nominate for JCP Annual Awards Attend annual JavaOne and JCP Annual Awards Ceremony Attend JCP related BOF sessions and give your feedback to Program Office Invite JCP program office members to your JUG or meetup Invite JSR Spec Leads to your JUG or meetup And always - hold a party!

    Read the article

  • Custom graph comparison?

    - by user57828
    I'm trying to compare two graphs using hash values (i.e., at the time of comparison, trying to avoid traversing the graphs). Is there a way to make a function such that the hash values compared can also lead to determining at which height the graphs differ? The comparisons between two graphs are to be made by comparing children at a certain level. One way to compare the graphs is to have a final hash value for the root node and compare them, but that wouldn't directly reflect at which level the graphs differ, since their immediate children might be the same (or any other case).
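    Not a full answer, but one sketch of the idea in Python, assuming a rooted, acyclic graph and hashing node labels per level: keep one digest per level instead of a single root hash, then compare the two lists to find the first level that differs.

      import hashlib
      from collections import deque

      def level_hashes(root, children):
          """One hash per level of a rooted, acyclic graph; `children(node)` yields child nodes."""
          hashes = []
          level, seen = [root], {root}
          while level:
              digest = hashlib.sha256()
              for label in sorted(str(n) for n in level):   # order-independent within a level
                  digest.update(label.encode("utf-8"))
              hashes.append(digest.hexdigest())
              nxt = []
              for node in level:
                  for child in children(node):
                      if child not in seen:                 # guard against shared children
                          seen.add(child)
                          nxt.append(child)
              level = nxt
          return hashes

      def first_differing_level(h1, h2):
          """Index of the first level whose hashes differ, or None if the lists are equal."""
          for i, (a, b) in enumerate(zip(h1, h2)):
              if a != b:
                  return i
          return None if len(h1) == len(h2) else min(len(h1), len(h2))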

    Read the article

  • in memory datastore in haskell

    - by Simon
    I want to implement an in-memory datastore for a web service in Haskell. I want to run transactions in the STM monad. When I google "hash table STM Haskell" I only get this: Data.BTree.HashTable.STM. The module name and complexities suggest that this is implemented as a tree. I would think that an array would be more efficient for mutable hash tables. Is there a reason to avoid using an array for an STM hashtable? Do I gain anything with this STM hash table, or should I just use an STM ref to an IntMap?
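    A minimal sketch of the TVar-over-IntMap alternative mentioned at the end of the question; the trade-off is that every writing transaction contends on the single TVar, whereas a bucketed STM hash table lets unrelated keys commit independently.

      import Control.Concurrent.STM
      import qualified Data.IntMap.Strict as IM

      -- One shared TVar holding the whole map.
      type Store v = TVar (IM.IntMap v)

      newStore :: IO (Store v)
      newStore = newTVarIO IM.empty

      insertKey :: Int -> v -> Store v -> STM ()
      insertKey k v store = modifyTVar' store (IM.insert k v)

      lookupKey :: Int -> Store v -> STM (Maybe v)
      lookupKey k store = fmap (IM.lookup k) (readTVar store)

      deleteKey :: Int -> Store v -> STM ()
      deleteKey k store = modifyTVar' store (IM.delete k)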

    Read the article

  • Creating a voxel world with 3D arrays using threads

    - by Sean M.
    I am making a voxel game (a bit like Minecraft) in C++(11), and I've come across an issue with creating a world efficiently. In my program, I have a World class, which holds a 3D array of Region class pointers. When I initialize the world, I give it a width, height, and depth so it knows how large of a world to create. Each Region is split up into a 32x32x32 area of blocks, so as you may guess, it takes a while to initialize the world once the world gets to be above 8x4x8 Regions.

    In order to alleviate this issue, I thought that using threads to generate different levels of the world concurrently would make it go faster. Having not used threads much before this, and being still relatively new to C++, I'm not entirely sure how to go about implementing one thread per level (level being an xz plane with a height of 1), when there is a variable number of levels. I tried this:

      for(int i = 0; i < height; i++) {
          std::thread th(std::bind(&World::load, this, width, height, depth));
          th.join();
      }

    Where load() just loads all Regions at height "height". But that executes the threads one at a time (which makes sense, looking back), and that of course takes as long as generating all Regions in one loop. I then tried:

      std::thread t1(std::bind(&World::load, this, w, h1, h2 - 1, d));
      std::thread t2(std::bind(&World::load, this, w, h2, h3 - 1, d));
      std::thread t3(std::bind(&World::load, this, w, h3, h4 - 1, d));
      std::thread t4(std::bind(&World::load, this, w, h4, h - 1, d));
      t1.join();
      t2.join();
      t3.join();
      t4.join();

    This works in that the world loads about 3-3.5 times faster, but this forces the height to be a multiple of 4, and it also gives the same exact VAO object to every single Region, which each need individual VAOs in order to render properly. The VAO of each Region is set in the constructor, so I'm assuming that somehow the VAO number is not thread safe or something (again, unfamiliar with threads).

    So basically, my question is two-part: How do I implement a variable number of threads that all execute at the same time, and force the main thread to wait for them using join() without stopping the other threads? How do I make the VAO objects thread safe, so when a bunch of Regions are being created at the same time across multiple threads, they don't all get the exact same VAO?

    Turns out it has to do with GL contexts not working across multiple threads. I moved the VAO/VBO creation back to the main thread. Fixed! Here is the code for block.h/.cpp, region.h/.cpp, and CVBObject.h/.cpp which controls VBOs and VAOs, in case you need it. If you need to see anything else just ask.

    EDIT: Also, I'd prefer not to have answers that are like "you should have used boost". I'm trying to do this without boost to get used to threads before moving onto other libraries.
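    For the first part, a minimal sketch of launching one worker per level and joining them only after all have been started (the World::load signature with a first/last level, as in the four-thread variant above, is assumed):

      #include <functional>
      #include <thread>
      #include <vector>

      void loadLevelsConcurrently(int width, int height, int depth,
                                  const std::function<void(int, int, int, int)>& load) {
          std::vector<std::thread> workers;
          workers.reserve(height);

          for (int level = 0; level < height; ++level) {
              // Each worker generates exactly one xz-plane of Regions.
              workers.emplace_back(load, width, level, level, depth);
          }

          // Joining only after every worker has started keeps them running in parallel;
          // joining inside the launch loop (as in the first attempt) serializes them.
          for (std::thread& worker : workers) {
              worker.join();
          }
      }

      // Usage sketch, e.g. from inside World:
      // loadLevelsConcurrently(width, height, depth,
      //     [this](int w, int lo, int hi, int d) { load(w, lo, hi, d); });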

    Read the article

  • Are HTTP requests cached? [closed]

    - by nischayn22
    Many HTTP requests are sent repeatedly by browsers on almost every page load, such as requesting the jQuery .js file etc. Since these are already used on so many sites, don't modern browsers keep a cache for this? I am thinking of a system where the browser has a cached copy of a .js file that is used very, very frequently. On a new request for the .js file, it sends the server a request for a hash of the .js file (provided the server can reply to that) and compares the returned hash with the cached copy's hash... the rest is intuitive.

    Read the article

  • Data structure for file search

    - by poly
    I've asked this question before and I got a few answers/ideas, but I'm not sure how to implement them. I'm building a telecom messaging solution. Currently, I'm using a database to save my transactions/messages for the network stack I've built, and as you know it's slower than using a data structure (hash, linked list, etc...). My problem is that the data can be really huge, and it won't fit in memory. I was thinking of saving the records in a file and keeping a key and line number in a hash; then if I want to access some record I can get the line number from the hash and get it from the file. I don't know how efficient this is; I think the database is doing a way better job than this on my behalf. Please share whatever you have in mind.
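    A minimal Python sketch of that idea: records live in an append-only file on disk, while an in-memory dict maps each key to the (offset, length) of its record, so a lookup is one dict hit plus one seek and read. Names here are illustrative only.

      import os

      class FileBackedStore(object):
          def __init__(self, path):
              self.index = {}                      # key -> (offset, length)
              self.data = open(path, "a+b")        # append-only data file

          def put(self, key, record):
              self.data.seek(0, os.SEEK_END)       # records are always appended
              offset = self.data.tell()
              payload = record.encode("utf-8")
              self.data.write(payload)
              self.data.flush()
              self.index[key] = (offset, len(payload))

          def get(self, key):
              offset, length = self.index[key]     # raises KeyError if absent
              self.data.seek(offset)
              return self.data.read(length).decode("utf-8")

      store = FileBackedStore("transactions.log")
      store.put("msg-1", "some transaction payload")
      print(store.get("msg-1"))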

    Read the article

  • JQuery Checkbox generation

    - by Hash
    I have used the following code to generate some dynamic checkboxes. This works the first time and adds the checkbox chk2 to the page, but after that the trigger for $('#newLink').click is not working. Please help me with this. <div id="chkBoxesDiv"> <input type="checkbox" id="chk1" ></input> <input id="answerText" type="text" size="30" ></input>&nbsp; <input type="button" value="add new" id="newLink"/> </div> $(document).ready(function(){ $('#newLink').click(function (event){ var i = 0; //To count the children $("#chkBoxesDiv").children().each(function(){ var child = $(this); if(child.is(":checkbox")){ i++; } }); //prevent action event.preventDefault(); //get textbox value to fill checkbox var text = $("#answerText").val(); alert(i); //if text not empty do stuff if(text != ""){ //add label $("#chk"+i).after("<label for=\"chk"+i+"\" id=\"lblchk"+i+"\">"+text+"</label>"); $("#newLink").remove(); $("#answerText").remove(); $("#lblchk1").after("<br /><input type=\"checkbox\" id=\"chk"+(1+i)+"\" ></input><input type=\"text\" id=\"answerText\" size=\"30\" ></input>&nbsp;<input type=\"button\" value=\"add new\" id=\"newLink\"/>"); } }); });

    Read the article

  • Syntax Error with John Resig's Micro Templating after changing template tags <# {% {{ etc..

    - by optician
    I'm having a bit of trouble with John Resig's Micro templating. Can anyone help me with why it isn't working? This is the template <script type="text/html" id="row_tmpl"> test content {%=id%} {%=name%} </script> And the modified section of the engine str .replace(/[\r\t\n]/g, " ") .split("{%").join("\t") .replace(/((^|%>)[^\t]*)'/g, "$1\r") .replace(/\t=(.*?)%>/g, "',$1,'") .split("\t").join("');") .split("%}").join("p.push('") .split("\r").join("\\'") + "');}return p.join('');"); and the javascript var dataObject = { "id": "27", "name": "some more content" }; var html = tmpl("row_tmpl", dataObject); and the result, as you can see =id and =name seem to be in the wrong place? Apart from changing the template syntax blocks from <% %> to {% %} I haven't changed anything. This is from Firefox. Error: syntax error Line: 30, Column: 89 Source Code: var p=[],print=function(){p.push.apply(p,arguments);};with(obj){p.push(' test content ');=idp.push(' ');=namep.push(' ');}return p.join('');

    Read the article

  • Using NHibernate's HQL to make a query with multiple inner joins

    - by Abu Dhabi
    The problem here consists of translating a statement written in LINQ to SQL syntax into the equivalent for NHibernate. The LINQ to SQL code looks like so: var whatevervar = from threads in context.THREADs join threadposts in context.THREADPOSTs on threads.thread_id equals threadposts.thread_id join posts1 in context.POSTs on threadposts.post_id equals posts1.post_id join users in context.USERs on posts1.user_id equals users.user_id orderby posts1.post_time where threads.thread_id == int.Parse(id) select new { threads.thread_topic, posts1.post_time, users.user_display_name, users.user_signature, users.user_avatar, posts1.post_body, posts1.post_topic }; It's essentially trying to grab a list of posts within a given forum thread. The best I've been able to come up with (with the help of the helpful users of this site) for NHibernate is: var whatevervar = session.CreateQuery("select t.Thread_topic, p.Post_time, " + "u.User_display_name, u.User_signature, " + "u.User_avatar, p.Post_body, p.Post_topic " + "from THREADPOST tp " + "inner join tp.Thread_ as t " + "inner join tp.Post_ as p " + "inner join p.User_ as u " + "where tp.Thread_ = :what") .SetParameter<THREAD>("what", threadid) .SetResultTransformer(Transformers.AliasToBean(typeof(MyDTO))) .List<MyDTO>(); But that doesn't parse well, complaining that the aliases for the joined tables are null references. MyDTO is a custom type for the output: public class MyDTO { public string thread_topic { get; set; } public DateTime post_time { get; set; } public string user_display_name { get; set; } public string user_signature { get; set; } public string user_avatar { get; set; } public string post_topic { get; set; } public string post_body { get; set; } } I'm out of ideas, and while doing this by direct SQL query is possible, I'd like to do it properly, without defeating the purpose of using an ORM. Thanks in advance!

    Read the article

  • MySql: Problem when using a temporary table

    - by Alex
    Hi, I'm trying to use a temporary table to store some values I need for a query. The reason for using a temporary table is that I don't want to store the data permanently, so different users can modify it at the same time. That data is just stored for a second, so I think a temporary table is the best approach for this. The thing is that it seems that the way I'm trying to use it is not right (the query works if I use a permanent one). This is an example of the query: CREATE TEMPORARY TABLE SearchMatches (PatternID int not null primary key, Matches int not null) INSERT INTO SearchMatches (PatternID, Matches) VALUES ('12605','1'),('12503','1'),('12587','2'),('12456','1'), ('12457','2'),('12486','2'),('12704','1'),('12686','1'), ('12531','2'),('12549','1'),('12604','1'),('12504','1'), ('12586','1'),('12548','1'),('12530','1'),('12687','2'), ('12485','1'),('12705','1') SELECT pat.id, signatures.signature, products.product, versions.version, builds.build, pat.log_file, sig_types.sig_type, pat.notes, pat.kb FROM patterns AS pat INNER JOIN signatures ON pat.signature = signatures.id INNER JOIN products ON pat.product = products.id INNER JOIN versions ON pat.version = versions.id INNER JOIN builds ON pat.build = builds.id INNER JOIN sig_types ON pat.sig_type = sig_types.id, SearchMatches AS sm INNER JOIN patterns ON patterns.id = sm.PatternID WHERE sm.Matches <> 0 ORDER BY sm.Matches DESC, products.product, versions.version, builds.build LIMIT 0 , 50 Any suggestions? Thanks.

    Read the article

  • MySQL MyISAM table performance... painfully, painfully slow

    - by Salman A
    I've got a table structure that can be summarized as follows:

      pagegroup
        * pagegroupid
        * name
        has 3600 rows

      page
        * pageid
        * pagegroupid
        * data
        references pagegroup; has 10000 rows; can have anything between 1-700 rows per pagegroup; the data column is of type mediumtext and the column contains 100k - 200kbytes data per row

      userdata
        * userdataid
        * pageid
        * column1
        * column2
        * column9
        references page; has about 300,000 rows; can have about 1-50 rows per page

    The above structure is pretty straightforward; the problem is that a join from userdata to pagegroup is terribly, terribly slow even though I have indexed all columns that should be indexed. The time needed to run a query for such a join (userdata inner_join page inner_join pagegroup) exceeds 3 minutes. This is terribly slow considering the fact that I am not selecting the data column at all. Example of the query that takes too long: SELECT userdata.column1, pagegroup.name FROM userdata INNER JOIN page USING( pageid ) INNER JOIN pagegroup USING( pagegroupid ) Please help by explaining why it takes so long and what I can do to make it faster.

    Edit #1 Explain returns the following gibberish:
      id select_type table type possible_keys key key_len ref rows Extra
      1 SIMPLE userdata ALL pageid 372420
      1 SIMPLE page eq_ref PRIMARY,pagegroupid PRIMARY 4 topsecret.userdata.pageid 1
      1 SIMPLE pagegroup eq_ref PRIMARY PRIMARY 4 topsecret.page.pagegroupid 1

    Edit #2 SELECT u.field2, p.pageid FROM userdata u INNER JOIN page p ON u.pageid = p.pageid; /* 0.07 sec execution, 6.05 sec fetch */
      id select_type table type possible_keys key key_len ref rows Extra
      1 SIMPLE u ALL pageid 372420
      1 SIMPLE p eq_ref PRIMARY PRIMARY 4 topsecret.u.pageid 1 Using index
    SELECT p.pageid, g.pagegroupid FROM page p INNER JOIN pagegroup g ON p.pagegroupid = g.pagegroupid; /* 9.37 sec execution, 60.0 sec fetch */
      id select_type table type possible_keys key key_len ref rows Extra
      1 SIMPLE g index PRIMARY PRIMARY 4 3646 Using index
      1 SIMPLE p ref pagegroupid pagegroupid 5 topsecret.g.pagegroupid 3 Using where

    Moral of the story: Keep medium/long text columns in a separate table if you run into performance problems such as this one.
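    A sketch of that moral as DDL; the column names come from the question, while the exact split (what stays in page) is an assumption:

      -- Move the wide mediumtext column out of `page` so the join touches narrow rows only.
      CREATE TABLE page_data (
        pageid INT NOT NULL PRIMARY KEY,
        data   MEDIUMTEXT NOT NULL
      );
      -- `page` then keeps just pageid and pagegroupid (plus an index on pagegroupid),
      -- and the blob is fetched separately when needed:
      -- SELECT data FROM page_data WHERE pageid = ?;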

    Read the article
