Search Results

Search found 5124 results on 205 pages for 'week'.

  • Optimizing T-SQL where an array would be nice

    - by Polatrite
    Alright, first you'll need to grab a barf bag. I've been tasked with optimizing several old stored procedures in our database. This SP does the following:

    1) cursor loops through a series of "buildings"
    2) cursor loops through a week, Sunday-Saturday
    3) has a huge set of IF blocks that are responsible for counting how many Objects of what Types are present in a given building

    Essentially what you'll see in this code block is that, if there are 5 objects of type #2, it will increment @Type_2_Objects_5 by 1.

        IF @Number_Type_1_Objects = 0 BEGIN SET @Type_1_Objects_0 = @Type_1_Objects_0 + 1 END
        IF @Number_Type_1_Objects = 1 BEGIN SET @Type_1_Objects_1 = @Type_1_Objects_1 + 1 END
        IF @Number_Type_1_Objects = 2 BEGIN SET @Type_1_Objects_2 = @Type_1_Objects_2 + 1 END
        IF @Number_Type_1_Objects = 3 BEGIN SET @Type_1_Objects_3 = @Type_1_Objects_3 + 1 END
        [... Objects_4 through Objects_20 for Type_1]
        IF @Number_Type_2_Objects = 0 BEGIN SET @Type_2_Objects_0 = @Type_2_Objects_0 + 1 END
        IF @Number_Type_2_Objects = 1 BEGIN SET @Type_2_Objects_1 = @Type_2_Objects_1 + 1 END
        IF @Number_Type_2_Objects = 2 BEGIN SET @Type_2_Objects_2 = @Type_2_Objects_2 + 1 END
        IF @Number_Type_2_Objects = 3 BEGIN SET @Type_2_Objects_3 = @Type_2_Objects_3 + 1 END
        [... Objects_4 through Objects_20 for Type_2]

    In addition to being extremely hacky (and limited to a quantity of 20 objects), it seems like a terrible way of handling this. In a traditional language, this could easily be solved with a 2-dimensional array:

        objects[type][quantity] += 1;

    I'm a T-SQL novice, but since stored procedures often make heavy use of temporary tables (which could essentially serve as a 2-dimensional array), I was wondering if someone could illuminate a better way of handling a situation like this with two dynamic pieces of data to store.

    Requested in comments: the columns are simply Number_Type_1_Objects, Number_Type_2_Objects, Number_Type_3_Objects, Number_Type_4_Objects, Number_Type_5_Objects, and CurrentDateTime. Each row in the table represents 5 minutes. The expected output is the percentage of time a given quantity of objects is present throughout each day:

        Sunday - Object Type 1
        0 objects - 69 rows, 5:45, 34.85%
        1 object  - 85 rows, 7:05, 42.93%
        2 objects - 44 rows, 3:40, 22.22%

    On Sunday, there were 0 objects of type 1 for 34.85% of the day, 1 object for 42.93% of the day, and 2 objects for 22.22% of the day. Repeat for each object type.
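
    For what it's worth, the IF ladder above is hand-rolling a two-dimensional tally keyed by (type, quantity) - exactly the shape a GROUP BY over a temp table (or an unpivoted view of the five Number_Type_N_Objects columns) produces set-based, with no cursor and no 20-object ceiling. A minimal sketch of the tally logic, in Python rather than T-SQL and with an invented data shape:

        from collections import defaultdict

        # Invented rows: one (object_type, quantity) pair per 5-minute sample.
        samples = [(1, 0), (1, 1), (1, 1), (2, 5), (2, 5)]

        counts = defaultdict(int)             # the "2-D array": counts[(type, qty)]
        for obj_type, qty in samples:
            counts[(obj_type, qty)] += 1

        total = len(samples)
        for (obj_type, qty), n in sorted(counts.items()):
            print(f"type {obj_type}, {qty} objects: {n} rows, {n / total:.2%}")

    In the stored procedure, the same shape falls out of a SELECT type, quantity, COUNT(*) ... GROUP BY query, with the percentages computed against each day's total row count.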

  • Being pressured to GOTO the dark-side

    - by Dan McG
    We have a situation at work where developers working on a legacy (core) system are being pressured into using GOTO statements when adding new features to existing code that is already infested with spaghetti code. Now, I understand there may be arguments for using 'just one little GOTO' instead of spending the time on refactoring to a more maintainable solution. The issue is, this isolated 'just one little GOTO' isn't so isolated. At least once every week or so there is a new 'one little GOTO' to add. This codebase is already a horror to work with, due to code dating back to or before 1984 being so riddled with GOTOs that many Pastafarians would believe it was inspired by the Flying Spaghetti Monster itself. Unfortunately the language this is written in doesn't have any ready-made refactoring tools, which makes it harder to push for 'refactor to increase productivity later', because short-term wins are the only wins paid attention to here... Has anyone else experienced this issue, whereby everybody agrees that we cannot keep adding new GOTOs to jump 2000 lines to a random section, but analysts continually insist on doing it just this one time, and management approves it? tldr; How can one go about addressing the issue of developers being pressured (forced) to continually add GOTO statements (by add, I mean add to jump to random sections many lines away) because it 'gets that feature in quicker'? I'm beginning to fear we may lose valuable developers to the raptors over this...

  • psqlODBC won't load after installing MS SQL ODBC driver on RHEL 6

    - by Kapil Vyas
    I had the PostgreSQL drivers working on my RHEL 6, but after I installed Microsoft SQL Server ODBC Driver 1.0 for Linux I can no longer connect to PostgreSQL data sources. I can connect to SQL Server data sources fine. When I had this same issue a week ago, I uninstalled the MS SQL Server ODBC driver from Linux and that fixed it; I had to copy the psqlodbcw.so files over from another machine to restore them. I don't want to do the same this time - I want both drivers to work on Linux. This time around the setup file /usr/lib64/libodbcpsqlS.so got deleted, and restoring it did not fix the issue. I keep getting the following error in spite of the file being present with rwx permissions:

        [root@localhost lib64]# isql -v STUDENT dsname pwd12345
        [01000][unixODBC][Driver Manager]Can't open lib '/usr/lib64/psqlodbc.so' : file not found
        [ISQL]ERROR: Could not SQLConnect
        [root@localhost lib64]#

    Here is a printout of the file permissions:

        [root@localhost lib64]# ls -al p*.so
        lrwxrwxrwx. 1 root root     12 Dec  7 09:15 psqlodbc.so -> psqlodbcw.so
        -rwxr-xr-x. 1 root root 519496 Dec  7 09:35 psqlodbcw.so

    And my odbcinst.ini file looks as follows:

        [PostgreSQL]
        Description=ODBC for PostgreSQL
        Driver=/usr/lib/psqlodbc.so
        Driver64=/usr/lib64/psqlodbc.so
        Setup=/usr/lib/libodbcpsqlS.so
        Setup64=/usr/lib64/libodbcpsqlS.so
        FileUsage=1
        UsageCount=4

    I also referred to this link: http://mailman.unixodbc.org/pipermail/unixodbc-support/2010-September.txt

  • Qt moc error with no output

    - by Robert Parker
    So I'm pretty new to Qt, and I've just inherited a project from someone else who is also new to Qt (he isn't around this week, btw). We are using Visual Studio 2008 and have the latest version of Qt installed (4.6.2). The project builds fine on my coworker's machine, and I can get the project from svn and build it directly. But under any other circumstances it refuses to build on my machine, and it doesn't give me much of an explanation why. Even if I just do a 'build clean' and then a 'build' it doesn't work; any slight modification will make it fail. When I try to build the entire project I get the error message:

        1>Moc'ing MatrixTypeInterface.h...
        1>moc: Cannot create .\GeneratedFiles\Debug\moc_MatrixTypeInterface.cpp;.\GeneratedFiles\Debug\moc_matrixtypeinterface.cpp
        1>Project : error PRJ0019: A tool returned an error code from "Moc'ing MatrixTypeInterface.h..."

    The moc tool doesn't give any sort of error message as to why it isn't working, and I wasted most of yesterday trying to figure out why. I got the command that VS was using to call moc, and I entered it on the command line myself. It didn't write anything to the screen. Any ideas?

  • Is it possible, in a Django template, to check if an object is contained in a list?

    - by AlexH
    I'm very new to django, about a week into it. I'm making a site where users enter stuff, then other users can vote on whether they like the stuff or not. I know it's not so novel, but it's a good project to learn a bunch of tools. I have a many-to-many table for storing who likes or dislikes what. Before I render the page, I pull out all the likes and dislikes for the current user, along with the stuff I'm going to show on the page. When I render the page, I go through the list of stuff I'm going to show and print them out one at a time. I want to show the user which stuff they liked, and which they didn't. So in my django template, I have an object called entry. I also have two lists of objects called likes and dislikes. Is there any way to determine if entry is a member of either list, inside my django template. I think what I'm looking for is a filter where I can say something like {% if entry|in:likes %} or {% if likes|contains:entry %} I know I could add a method to my model and check for each entry individually, but that seems like it would be database intensive. Is there a better way to think about this problem?
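
    If the Django version is 1.2 or newer, the smart {% if %} tag supports the in operator directly, so {% if entry in likes %} works as written. On older versions a tiny custom filter does it; a minimal sketch, assuming it lives in a registered templatetags module (the filter name is my own invention):

        from django import template

        register = template.Library()

        @register.filter
        def contains(collection, item):
            # Usable as {% if likes|contains:entry %} once the module is {% load %}ed.
            return item in collection

    Since likes and dislikes are fetched once before rendering, either spelling is an in-memory membership test and costs no extra queries per entry.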

  • Planning and coping with deadlines in SCRUM

    - by John
    From Wikipedia:

        During each “sprint”, typically a two to four week period (with the length being decided by the team), the team creates a potentially shippable product increment (for example, working and tested software). The set of features that go into a sprint come from the product “backlog,” which is a prioritized set of high level requirements of work to be done. Which backlog items go into the sprint is determined during the sprint planning meeting. During this meeting, the Product Owner informs the team of the items in the product backlog that he or she wants completed. The team then determines how much of this they can commit to complete during the next sprint. During a sprint, no one is allowed to change the sprint backlog, which means that the requirements are frozen for that sprint. After a sprint is completed, the team demonstrates the use of the software.

    I was reading this and two questions immediately popped into my head:

    1) If a sprint is only a couple of weeks, decided in a single meeting, how can you accurately plan what can be achieved? High-level tasks can't be estimated accurately in my experience, and can easily double what seems reasonable. As a developer, I hate being pushed into committing to what I can deliver in the next month based on a set of customer requirements; this goes against everything I know about generating reliable estimates, as opposed to roughly estimating and then doubling it!

    2) Since the requirements are supposed to be locked and a deliverable product available at the end, what happens when something does take twice as long? What if a feature is only half done at the end of the sprint?

    The wiki article goes on to talk about sprint planning, where things are broken down into much smaller tasks for estimation (< 1 day), but this is after the sprint features are already planned and the release agreed, isn't it? Kind of like a salesman promising something without consulting the developers.

  • Good ways to earn income as a self employed developer

    - by nullptr
    I was just wondering if people could share their experiences and ideas about generating/earning income from a software product or service they have personally developed. To me this seems like a good way to earn a living while doing what we love (programming) and working on projects and problems which interest us - i.e. NOT boring bank or marketing software, 9-5 all week... Some ideas I have:

    - Web 2.0 style sites (Facebook, YouTube, Twitter, Digg, etc.). These can be very profitable, as we all know, but can take years to take off. Are there ways to survive until/if this does happen?
    - Mobile applications: iPhone, Google Android and the upcoming Nintendo DS app store. These have good potential, in that finding a market for your application and selling it are made easy.
    - Shareware/PC software. A bit '80s and '90s, and you kind of need to be a salesman/marketer to sell it, but it's the only other thing I can think of.

    Also, I'm not talking about doing freelance work. I'm only interested in ideas you can come up with and develop yourself (not other people's ideas or problems which you are paid to develop). Things that a sole developer, or at most 2 developers, could work on and that have good potential for high returns on investment (in terms of time) would be great. PS, I wish I'd thought of stackoverflow!

  • Problem with Boost::Asio for C++

    - by Martin Lauridsen
    Hi there. For my bachelor's thesis, I am implementing a distributed version of an algorithm for factoring large integers (finding the prime factorisation). This has applications in e.g. the security of the RSA cryptosystem. My vision is that clients (Linux or Windows) will download an application and compute some numbers (these are independent, thus suited for parallelization). The numbers (not found very often) will be sent to a master server that collects them. Once enough numbers have been collected by the master server, it will do the rest of the computation, which cannot be easily parallelized.

    Anyhow, to the technicalities. I was thinking of using Boost::Asio for a socket client/server implementation for the clients' communication with the master server. Since I want to compile for both Linux and Windows, I thought Windows would be as good a place to start as any. So I downloaded the Boost library and compiled it as it said on the Boost Getting Started page:

        bootstrap
        .\bjam

    It all compiled just fine. Then I try to compile one of the tutorial examples, client.cpp, from Asio (edit: can't post the link because of restrictions). I am using the Visual C++ compiler from Microsoft Visual Studio 2008, like this:

        cl /EHsc /I D:\Downloads\boost_1_42_0 client.cpp

    But I get this error:

        /out:client.exe
        client.obj
        LINK : fatal error LNK1104: cannot open file 'libboost_system-vc90-mt-s-1_42.lib'

    Anyone have any idea what could be wrong, or how I could move forward? I have been trying pretty much all week to get a simple client/server socket program for C++ working, but with no luck. Serious frustration kicking in. Thank you in advance.

  • Message Handlers and the WeakReference issue

    - by user1058647
    The following message handler works fine receiving messages from my service:

        private Handler handler = new Handler() {
            public void handleMessage(Message message) {
                Object path = message.obj;
                if (message.arg1 == 5 && path != null) { // 5 means it's a single map leg to plot on the map
                    String myString = (String) message.obj;
                    Gson gson = new Gson();
                    MapPlot mapleg = gson.fromJson(myString, MapPlot.class);
                    myMapView.getOverlays().add(new DirectionPathOverlay(mapleg.fromPoint, mapleg.toPoint));
                    mc.animateTo(mapleg.toPoint);
                } else {
                    if (message.arg1 == RESULT_OK && path != null) {
                        Toast.makeText(PSActivity.this, "Service Started" + path.toString(),
                                Toast.LENGTH_LONG).show();
                    } else {
                        Toast.makeText(PSActivity.this, "Service error" + String.valueOf(message.arg1),
                                Toast.LENGTH_LONG).show();
                    }
                }
            };
        };

    However, even though it tests out alright in the AVD (I'm feeding it a large KML file via DDMS), the "Object path = message.obj;" line has a WARNING saying "this Handler class should be static else leaks might occur". But if I say "static Handler handler = new Handler()" it won't compile, complaining that I "cannot make a static reference to a non-static field myMapView". If I can't make such references, I can't do anything useful. This led me into several hours of googling around on this issue and learning more about WeakReference than I ever wanted to know. The often-found recommendation is that I should replace

        private Handler handler = new Handler()

    with

        static class handler extends Handler {
            private final WeakReference<PSActivity> mTarget;

            handler(PSActivity target) {
                mTarget = new WeakReference<PSActivity>(target);
            }

    But this won't compile either, still complaining that I can't make a static reference to a non-static field. So, my question a week or two ago was "how can I write a message handler for Android so my service can send data to my activity?" Even though I have working code, the question still stands, with the suffix "without leaking memory". Thanks, Gary
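
    The step the static-class attempt is missing: the static handler must not touch myMapView directly at all - inside handleMessage you call mTarget.get(), bail out if it returns null, and reach the map view through the returned PSActivity reference. The shape of that pattern, sketched with Python's weakref module for brevity (class and method names are hypothetical):

        import weakref

        class Activity:
            def plot(self, msg):
                print("plotting", msg)

        class Handler:
            def __init__(self, target):
                # Hold the Activity weakly so the Handler cannot leak it.
                self._target = weakref.ref(target)

            def handle_message(self, msg):
                target = self._target()   # the equivalent of mTarget.get()
                if target is None:
                    return                # Activity already collected; drop the message
                target.plot(msg)          # touch fields via target, never directly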

  • ActiveRecord Validations for Models with has_many, belongs_to associations and STI

    - by keruilin
    I have four models: User, Award, Badge and GameWeek. The associations are as follows:

    - User has many awards. Award belongs to user.
    - Badge has many awards. Award belongs to badge.
    - User has many game_weeks. GameWeek belongs to user.
    - GameWeek has many awards. Award belongs to game_week.

    Thus, user_id, badge_id and game_week_id are foreign keys in the awards table. Badge implements an STI model; let's just say it has the following subclasses: BadgeA and BadgeB. Some rules to note: the game_week_id FK can be nil for BadgeA, but can't be nil for BadgeB. Here are my questions:

    1. For BadgeA, how do I write a validation that it can only be awarded one time? That is, the user can't have more than one - ever.
    2. For BadgeB, how do I write a validation that it can only be awarded one time per game week?

  • Check if a date is an allowed weekday in PHP?

    - by moogeek
    Hello! I'm stuck with a problem: how to check if a specific date falls on an allowed weekday, given an array of allowed weekdays, in PHP. For example:

        function dateIsAllowedWeekday($_date, $_allowed) {
            if ((isDate($_date)) && (($_allowed != "null") && ($_allowed != null))) {
                $allowed_weekdays = json_decode($_allowed);
                $weekdays = array();
                foreach ($allowed_weekdays as $wd) {
                    $weekday = date("l", date("w", strtotime($wd)));
                    array_push($weekdays, $weekday);
                }
                if (in_array(date("l", strtotime($_date)), $weekdays)) { return TRUE; }
                else { return FALSE; }
            }
            else { return FALSE; }
        }

        /////////////////////////////
        $date = "21.05.2010";
        $wd = "[0,1,2]";
        if (dateIsAllowedWeekday($date, $wd)) { echo "$date is within $wd weekday values!"; }
        else { echo "$date isn't within $wd weekday values!"; }

    I have input dates formatted as "d.m.Y" and an array returned from the database with weekday numbers (formatted as 'numeric representation of the day of the week') like [0,1,2] - (Sunday, Monday, Tuesday). The returned string from the database can be "null", so I check for that too. Then, the isDate function checks whether the date is a date, and that part is OK. I want to check if my date, for example 21.05.2010, falls on an allowed weekday in this array. My function always returns TRUE, and somehow the weekday is always 'Thursday', and I don't know why... Is there any other way to check this, or what could be the error in my code above? thx
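
    A likely diagnosis for the constant 'Thursday': in date("l", date("w", strtotime($wd))), the outer date("l", ...) is handed a weekday number (0-6) as if it were a Unix timestamp - a few seconds past the epoch, i.e. January 1, 1970, which was a Thursday. Comparing the date's weekday number directly against the allowed array avoids that round-trip entirely; the same logic sketched in Python (an illustration of the approach, not a drop-in PHP fix):

        import json
        from datetime import datetime

        def date_is_allowed_weekday(date_str, allowed_json):
            if not allowed_json or allowed_json == "null":
                return False
            allowed = json.loads(allowed_json)      # e.g. [0, 1, 2] with 0 = Sunday
            d = datetime.strptime(date_str, "%d.%m.%Y")
            sunday_based = (d.weekday() + 1) % 7    # Python: Monday = 0 ... Sunday = 6
            return sunday_based in allowed

        print(date_is_allowed_weekday("21.05.2010", "[0,1,2]"))  # a Friday -> False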

  • iPhone: Low memory crash...

    - by MacTouch
    Once again I'm hunting memory leaks and other crazy mistakes in my code. :) I have a cache of frequently used files (images, data records, etc., with a TTL of about one week and a size-limited cache of 100MB). There are sometimes more than 15000 files in a directory. On application exit the cache writes a control file with the current cache size, along with other useful information. If the application crashes for some reason (sh.. happens sometimes), I have in that case to calculate the size of all files on application start, to make sure I know the cache size. My app crashes at this point because of low memory, and I have no clue why. The memory leak detector does not show any leaks at all, and I don't see any either. What's wrong with the code below? Is there any other fast way to calculate the total size of all files within a directory on iPhone, maybe without enumerating the whole contents of the directory? The code is executed on the main thread.

        NSUInteger result = 0;
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        NSDirectoryEnumerator *dirEnum = [[[NSFileManager defaultManager] enumeratorAtPath:path] retain];
        int i = 0;
        while ([dirEnum nextObject]) {
            NSDictionary *attributes = [dirEnum fileAttributes];
            NSNumber *fileSize = [attributes objectForKey:NSFileSize];
            result += [fileSize unsignedIntValue];
            if (++i % 500 == 0) { // I tried lower values too
                [pool drain];
            }
        }
        [dirEnum release];
        dirEnum = nil;
        [pool release];
        pool = nil;

    Thanks, MacTouch

  • WCF + Azure = Nightmare!

    - by lsb
    Hi! I've spent the past week trying to get a secure form of WCF to work on Azure, all to no avail! My use case is pretty simple: I want to call a WCF endpoint in the cloud and pass messages to be queued for a Worker Role. Beyond that, I want to limit access to pre-authorized users, authenticated via username & password. I've tried to get this working with Transport, TransportWithMessageCredential and Message security, but nothing seems to work. Indeed, I've worked through every example and snippet that I could find, most recently the "Service using binary HTTP binding with transport security and message credentials and Silverlight client" example on the http://code.msdn.microsoft.com/wcfazure page. I'm pretty sure that I'm being knocked down by small bugs and beta changes, but the end result is that I'm totally stuck. This is a critical path item for me, so any suggestions would be greatly appreciated. A complete working example or a walkthrough would be even better!

  • Reporting Services 2005 - Parameter reliant on cascading parameters

    - by sHr0oMaN
    Good day. In an SSRS 2005 report I have three report parameters: FinancialPeriodType ("Month" or "Week" in a drop-down list), FinancialPeriod (a cascading drop-down list populated depending on the first selection) and another parameter, OpeningBalance, of type float. The first two parameters are cascading, i.e. the first parameter is used by the query populating the second's available values. This works fine. What I'm attempting to do is default the value of OpeningBalance to a value from a dataset populated by a stored procedure which takes in the first two parameters. However, as soon as I select a value for the first parameter, I get the following error:

        An error occurred during report processing. The value for the report parameter 'OpeningBalance' is not valid for its type.

    I've tried setting the default of the second parameter to a meaningful default (something like 200901), as well as defaulting the second parameter in the SQL stored procedure, to no effect. Using SQL Profiler I've noticed that selecting a value for the first parameter doesn't even execute the SQL used to obtain the available values for the second parameter.

  • Google Chrome Extension : Port: Could not establish connection. Receiving end does not exist

    - by tcornelis
    I have been looking for an answer for almost a week now, but having read all the stackoverflow items I can't seem to find a solution that works for me. The error that I'm getting is:

        Port: Could not establish connection. Receiving end does not exist. lastError:30
        set lastError:30
        dispatchOnDisconnect messaging:277

    Folder layout:

        img/
            developer_icon.png
        js/
            sidebar.js
            main.js
            jquery-2.0.3.js
        manifest.json

    My manifest.json file looks something like this (it is version 2):

        "browser_action": {
            "default_icon": "./img/developer_icon.png"
        },
        "content_scripts": [
            {
                "matches": ["*://*/*"],
                "js": ["./js/sidebar.js"],
                "run_at": "document_end"
            }
        ],
        "background": {
            "scripts": ["./js/main.js", "./js/jquery-2.0.3.js"]
        },

    I want to handle the user clicking the extension icon so I can inject a sidebar into the existing website (because the extension I would like to develop requires that amount of space). So in main.js:

        chrome.browserAction.onClicked.addListener(function(tab) {
            chrome.tabs.getSelected(null, function(tab) {
                chrome.tabs.sendMessage(
                    // Selected tab id
                    tab.id,
                    // Params inside a data object
                    {callFunction: "toggleSidebar"},
                    // Optional callback function
                    function(response) {
                        console.log(response);
                    }
                );
            });
        });

    And in sidebar.js:

        chrome.runtime.onMessage.addListener(function(req, sender, sendResponse) {
            console.log("sidebar handling request");
            toggleSidebar();
        });

    But I'm never able to see the console.log in my console because of the error. Does someone know what I did wrong? Thanks in advance!

  • Database for managing large volumes of (system) metrics

    - by symcbean
    Hi, I'm looking at building a system for managing and reporting stats on web page performance. I'll be collecting a lot more stats than are available in the standard log formats (approx. 20 metrics), but compared to most types of database applications, the base data structure will be very simple. My problem is that I'll be accumulating a lot of data - in the region of 100,000 records (i.e. sets of metrics) per hour. Of course, resources are very limited! So that it's possible to sensibly interact with the data, I'd need to consolidate each metric into one-minute bins broken down by URL; then, for anything more than 1 day old, consolidate into 10-minute bins; then, at 1 week, hourly bins. At the front end, I want to provide a view (preferably as plots) of the last hour of data, with the facility for users to drill up/down through defined hierarchies of URLs (which do not always map directly to the hierarchy expressed in the path of the URL) and to view different time frames. Rather than coding all this myself and using a relational database, I was wondering if there were tools available which would facilitate both the management of the data and the reporting. I had a look at Mondrian, however I can't see from the documentation I've looked at whether it's possible to drop the more granular information while maintaining the consolidated views of the data. RRDTool looks promising in terms of managing the data consolidation, but seems rather limited in terms of querying the dataset as a multi-dimensional/relational database. What else should I be looking at?
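
    Whatever tool ends up owning the storage, the roll-up itself is just flooring timestamps to a bin boundary and averaging per (bin, URL). A minimal sketch of that consolidation step in Python, with an invented record shape:

        from collections import defaultdict

        def bin_start(epoch_seconds, bin_width):
            # Floor a timestamp to the start of its bin (bin_width in seconds).
            return epoch_seconds - epoch_seconds % bin_width

        def consolidate(samples, bin_width):
            # samples: (epoch_seconds, url, value) tuples -> {(bin, url): average}
            sums = defaultdict(lambda: [0.0, 0])
            for ts, url, value in samples:
                key = (bin_start(ts, bin_width), url)
                sums[key][0] += value
                sums[key][1] += 1
            return {key: total / n for key, (total, n) in sums.items()}

        raw = [(1349889300, "/home", 120.0), (1349889342, "/home", 80.0)]
        print(consolidate(raw, 60))    # one-minute bins, for data < 1 day old
        print(consolidate(raw, 600))   # ten-minute bins, for 1 day to 1 week old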

  • Calculate business days including holidays

    - by ran
    I need to calculate the number of business days between two dates. For example, we have a holiday (in the USA) on July 4th, so if my dates are date1 = 07/03/2012 and date2 = 07/06/2012, the number of business days between these dates should be 1, since July 4th is a holiday. I have the method below to calculate business days, which only skips weekends, not holidays. Is there any way to account for holidays as well? Please help me with this.

        public static int getWorkingDaysBetweenTwoDates(Date startDate, Date endDate) {
            Calendar startCal;
            Calendar endCal;
            startCal = Calendar.getInstance();
            startCal.setTime(startDate);
            endCal = Calendar.getInstance();
            endCal.setTime(endDate);
            int workDays = 0;

            // Return 0 if start and end are the same
            if (startCal.getTimeInMillis() == endCal.getTimeInMillis()) {
                return 0;
            }

            if (startCal.getTimeInMillis() > endCal.getTimeInMillis()) {
                startCal.setTime(endDate);
                endCal.setTime(startDate);
            }

            do {
                startCal.add(Calendar.DAY_OF_MONTH, 1);
                if (startCal.get(Calendar.DAY_OF_WEEK) != Calendar.SATURDAY
                        && startCal.get(Calendar.DAY_OF_WEEK) != Calendar.SUNDAY) {
                    ++workDays;
                }
            } while (startCal.getTimeInMillis() < endCal.getTimeInMillis());

            return workDays;
        }
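
    The usual approach is to keep a set of holiday dates and test it alongside the weekend check; in Java a HashSet of dates plays that role. The same loop with the extra test, sketched in Python for brevity (counting the days strictly between the endpoints, which matches the expected answer of 1 for 07/03-07/06):

        from datetime import date, timedelta

        HOLIDAYS = {date(2012, 7, 4)}   # assumed holiday calendar for the example

        def working_days_between(start, end):
            if start > end:
                start, end = end, start
            days, d = 0, start + timedelta(days=1)
            while d < end:
                if d.weekday() < 5 and d not in HOLIDAYS:   # Mon-Fri and not a holiday
                    days += 1
                d += timedelta(days=1)
            return days

        print(working_days_between(date(2012, 7, 3), date(2012, 7, 6)))   # -> 1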

  • Best Practice: Protecting Personally Identifiable Data in a ASP.NET / SQL Server 2008 Environment

    - by William
    Thanks to a SQL injection vulnerability found last week, some of my recommendations are being investigated at work. We recently re-did an application which stores personally identifiable information whose disclosure could lead to identity theft. While we read some of the data on a regular basis, the restricted data we only need a couple of times a year, and then only two employees need it. I've read up on SQL Server 2008's encryption functions, but I'm not convinced that's the route I want to go. My problem ultimately boils down to the fact that we're either using symmetric keys or asymmetric keys encrypted by a symmetric key, so it seems like a SQL injection attack could still lead to a data leak. I realize permissions should prevent that - permissions should also have prevented the leak in the first place. It seems to me the better method would be to asymmetrically encrypt the data in the web application, then store the private key offline and have a fat client that the two employees can run the few times a year they need the restricted data, so the data is decrypted on the client. This way, if the server gets compromised, we don't leak old data, although depending on what the attackers do, we may leak future data. I think the big disadvantage is that this would require re-writing the web application and creating a new fat application (to pull the restricted data). Due to the recent problem, I can probably get the time allocated, so now would be the proper time to make the recommendation. Do you have a better suggestion? Which method would you recommend? More importantly, why?
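
    For the web-tier-encrypts / offline-client-decrypts design, the core is plain asymmetric encryption with only the public key deployed to the server (in .NET, RSACryptoServiceProvider is the usual entry point). A minimal sketch of the idea using Python's pyca/cryptography package - an illustration, not the ASP.NET implementation; note that small fields like an SSN fit under RSA-OAEP's payload limit, while larger blobs would need hybrid encryption:

        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import padding, rsa

        # Generated once, offline; only the public key ships with the web app.
        private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        public_key = private_key.public_key()

        oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                            algorithm=hashes.SHA256(), label=None)

        ciphertext = public_key.encrypt(b"123-45-6789", oaep)  # web tier stores this
        # Only the offline fat client, holding the private key, can reverse it:
        assert private_key.decrypt(ciphertext, oaep) == b"123-45-6789"

    An injected query can then only ever exfiltrate ciphertext; the trade-off, as noted above, is that decryption (and the private key) moves into the rarely-used fat client.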

  • Extend legacy site with another server-side programming platform best practice

    - by Andrew Florko
    The company I work for has a site developed 6-8 years ago by a team that was enthusiastic enough to use their own private PHP-based CMS. I have to put dynamic data from one intranet company database onto this site within one week: 2-3 pages. I contacted the company site administrator and she showed me the administrative part - the CMS allows only inserting HTML blocks and managing the site map (the site is deployed on a machine that is inside the company & fully accessible & upgradeable). I'm not a PHP guy and I don't want to dive into a legacy CMS engine hardly anyone has ever heard of. I also don't want to contact the developer team, 'cos I'm not sure they are still around and capable of extending this old-days site, and it would take too much time anyway. I am about to deploy a helper ASP.NET site on IIS with the 2-3 required pages and refer to the helper site via an iframe from the present site. The new pages will also pull some dynamic content from the present site. Is this OK, and what are the pitfalls of the iframe approach?

  • Information about PTEs (Page Table Entries) in Windows

    - by Patrick
    In order to find buffer overflows more easily, I am changing our custom memory allocator so that it allocates a full 4KB page instead of only the wanted number of bytes. Then I change the page protection and size so that if the caller writes before or after its allocated piece of memory, the application immediately crashes. The problem is that although I have enough memory, the application never starts up completely because it runs out of memory. This has two causes:

    1. Since every allocation needs 4 KB, we probably reach the 2 GB limit very soon. This problem could be solved if I made a 64-bit executable (didn't try it yet).
    2. Even when I only need a few hundred megabytes, the allocations fail at a certain moment.

    The second problem is the biggest one, and I think it's related to the maximum number of PTEs (page table entries, which store information on how virtual memory is mapped to physical memory, and whether pages should be read-only or not) you can have in a process. My questions (or a cry for tips):

    - Where can I find information about the maximum number of PTEs in a process?
    - Is this different (higher) for 64-bit systems/applications or not?
    - Can the number of PTEs be configured in the application or in Windows?

    Thanks, Patrick

    PS. A note for those who will try to argue that you shouldn't write your own memory manager:

    - My application is rather specific, so I really want full control over memory management (can't give any more details).
    - Last week we had a memory overwrite which we couldn't find using the standard C++ allocator and the debugging functionality of the C/C++ runtime (it only said "block corrupt" minutes after the actual corruption).
    - We also tried standard Windows utilities (like GFLAGS, ...), but they slowed down the application by a factor of 100 and couldn't find the exact position of the overwrite either.
    - We also tried the "Full Page Heap" functionality of Application Verifier, but with it the application doesn't start up either (probably also running out of PTEs).

  • Can knowing C actually hurt the code you write in higher level languages?

    - by Jurily
    The question seems settled, beaten to death even. Smart people have said smart things on the subject. To be a really good programmer, you need to know C. Or do you? I was enlightened twice this week. The first time made me realize that my assumptions don't go further than my knowledge behind them, and given the complexity of software running on my machine, that's almost non-existent. But what really drove it home was this Slashdot comment:

        The end result is that I notice the many naive ways in which traditional C "bare metal" programmers assume that higher level languages are implemented. They make bad "optimization" decisions in projects they influence, because they have no idea how a compiler works or how different a good runtime system may be from the naive macro-assembler model they understand.

    Then it hit me: C is just one more abstraction, like all the others. Even the CPU itself is only an abstraction! I've just never seen it break, because I don't have the tools to measure it. I'm confused. Has my mind been mutilated beyond recovery, like Dijkstra said about BASIC? Am I living in a constant state of premature optimization? Is there hope for me, now that I've realized I know nothing about anything? Is there anything to know, even? And why is it so fascinating that everything I've written in the last five years might have been fundamentally wrong? To sum it up: is there any value in knowing more than the API docs tell me?

    EDIT: Made CW. Of course this also means now you must post examples of the interpreter/runtime optimizing better than we do :)

  • C++ - Need to learn some basics in a short while

    - by Rubys
    For reasons I will spare you, I have two weeks to learn some C++. I can learn alone just fine, but I need a good source. I don't think I have time to go through an entire book, and so I need some cliff notes, or possibly specific chapters/specialized resources I should look up. I know my Asm/C/C# well, so anything inherited from C, or any OOP, is not needed. What I do need is some sources on the following subjects (I have a page that specifies what is needed; this is basically it, but I trimmed what I already know):

    - new/delete in C++ (as opposed to C#)
    - Overloading cin/cout
    - Constructor, Destructor and MIL
    - Embedded Objects
    - References
    - Templates

    If you feel some basic C++ concept that is not shared with C/C# is missing from this list, feel free to mention it as well. But the above subjects are the ones I'm supposed to roughly know in two weeks' time. Any help would be appreciated, thanks.

  • Unit testing a method with many possible outcomes

    - by Cthulhu
    I've built a simple-ish method that constructs a URL out of approximately 5 parts: base address, port, path, 'action', and a set of parameters. Of these, only the address part is mandatory; the other parts are all optional. A valid URL has to come out of the method for each permutation of input parameters, such as:

        address
        address port
        address port path
        address path
        address action
        address path action
        address port action
        address port path action
        address action params
        address path action params
        address port action params
        address port path action params

    and so forth. The basic approach is to write one unit test for each of these possible outcomes, each unit test passing the address and any of the optional parameters to the method and testing the outcome against the expected output. However, I wonder, is there a Better (tm) way to handle a case like this? Are there any (good) unit test patterns for this?

    (rant) I only now realize that I learned to write unit tests a few years ago but never really (feel like I) advanced in the area, and that every unit test is a repeat of building parameters, expected outcome, filling mock objects, calling a method and testing the outcome against the expected outcome. I'm pretty sure this is the way to go in unit testing, but it gets kinda tedious, yanno. Advice on that matter is always welcome. (/rant)

    (note) Christmas weekend approaching; probably won't reply to suggestions until next week. (/note)
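
    One pattern that collapses this into a single data-driven test: generate every combination of the optional parts and assert the invariants each combination must satisfy, instead of hand-writing one test per permutation. A sketch using Python's unittest subTest (3.4+), where build_url is a hypothetical stand-in for the method under test:

        import itertools
        import unittest

        def build_url(address, port=None, path=None, action=None, params=None):
            # Hypothetical stand-in for the real URL builder.
            url = address + (f":{port}" if port else "")
            url += f"/{path}" if path else ""
            url += f"/{action}" if action else ""
            if params:
                url += "?" + "&".join(f"{k}={v}" for k, v in sorted(params.items()))
            return url

        class BuildUrlTest(unittest.TestCase):
            def test_every_combination(self):
                optional = {"port": 8080, "path": "api", "action": "list",
                            "params": {"q": "1"}}
                for r in range(len(optional) + 1):
                    for combo in itertools.combinations(optional.items(), r):
                        kwargs = dict(combo)
                        with self.subTest(**kwargs):  # one sub-test per permutation
                            url = build_url("http://host", **kwargs)
                            self.assertTrue(url.startswith("http://host"))
                            self.assertEqual(url.count("?"),
                                             1 if kwargs.get("params") else 0)

    Each failing combination is reported individually, so one test covers all sixteen permutations without hiding which one broke.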

  • What's the SQL function for timestamp manipulation?

    - by Eric Leroy
    I'm new to SQL Server, and its time functions are different from MySQL's, so I'm having a terrible time finding a good site reference with USEFUL timestamp queries. I'm not able to locate the correct way of doing this query in SQL:

        Id      Timestamp
        -----------------------------------
        1145744 2012-10-10 18:15:11.500
        1145743 2012-10-10 18:15:11.313
        1145742 2012-10-10 18:15:11.313
        1145741 2012-10-10 18:15:11.253
        1145740 2012-10-10 18:15:11.190
        1145739 2012-10-10 18:15:11.190
        1145738 2012-10-10 18:15:11.127
        1145737 2012-10-10 18:15:11.067
        1145736 2012-10-10 18:15:11.063
        1145735 2012-10-10 18:15:10.940
        1145734 2012-10-10 18:15:10.817

        SELECT * FROM table WHERE Timestamp ... RANGE

    I need the range between 2 timestamps so I can select rows by the following units: second, minute, hour, day, week, month, year. Is there one function where I can put in 2 timestamps and get the range, or is this a mix of functions I need? Any good site references would be greatly appreciated; the MSDN site isn't helping me isolate the proper way of doing this. I've been searching for about an hour trying to get the last day, from 1:30 PM yesterday to 1:30 PM today.
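
    In T-SQL the usual building block is DATEADD, whose first argument is the unit: WHERE Timestamp >= DATEADD(day, -1, @now) AND Timestamp < @now covers the 1:30 PM-to-1:30 PM case, and second/minute/hour/week/month/year slot in the same way (DATEDIFF gives the distance between two timestamps in a chosen unit). The endpoint arithmetic, sketched in Python for the fixed-width units:

        from datetime import datetime, timedelta

        # Fixed-width units; month/year need calendar math (e.g. dateutil.relativedelta).
        WIDTHS = {"second": timedelta(seconds=1), "minute": timedelta(minutes=1),
                  "hour": timedelta(hours=1), "day": timedelta(days=1),
                  "week": timedelta(weeks=1)}

        def trailing_range(now, unit, count=1):
            # Endpoints of the trailing window, e.g. the last 1 day.
            return now - count * WIDTHS[unit], now

        start, end = trailing_range(datetime(2012, 10, 11, 13, 30), "day")
        print(start, end)   # 2012-10-10 13:30:00  ->  2012-10-11 13:30:00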

  • DotNetOpenAuth getting return_To url as the provider

    - by Jeff
    I have a provider that I'm upgrading from version 2 to version 3. Basically, a user comes to one of our sites and we say "if you don't have an OpenID, sign up with this one". They click that and are sent to our in-house provider, which signs them up for an account and has to verify their email address. In the link I send via email, I include the return-to URL. This way, when they click the link to verify their email (sometimes up to a week later), they are sent to our page that knows to send them back to the returnTo to log in. I could not find a good way to do this in version 2, so I used a hack. Sadly, this hack broke in version 3, and I'm completely stumped as to how to find the login page URL that is the return_To. Note: I need the full URL, not just the domain. The old hack:

        if (ProviderEndpoint.PendingAuthenticationRequest != null)
        {
            req = ProviderEndpoint.PendingAuthenticationRequest.ToString();
            int startpos = req.LastIndexOf("CheckIdRequest.ReturnTo = '");
            startpos += "CheckIdRequest.ReturnTo = '".Length;
            int endpos = req.LastIndexOf("token=") - startpos - 1;
            if (endpos > 0)
            {
                req = req.Substring(startpos, endpos);
            }
            else
            {
                log.Fatal("Missing token in url" + ProviderEndpoint.PendingAuthenticationRequest.ToString());
                return "~/";
            }
            req = req.Replace("&internalLogin=True", "");
            req = req.Replace("?internalLogin=True&", "?");
            req = req.Replace("?internalLogin=True", "");
        }
