Search Results

  • File open fails initially when trying to open a file located on a win2k8 share but eventually succeeds

    - by Ruddy Douglass
    Core of the problem: I receive "(0x80070002) The system cannot find the file specified" for roughly 8 to 9 seconds before the file can be opened successfully. In a nutshell, we have two COM components. Component A calls into Component B and asks for a UNC filename to write to - the filename returned doesn't exist yet, but the path does. Component A then does its work, creates and populates the file, and tells Component B it's done by making another COM call. At that point Component B calls MoveFile to rename the file to its "official" name.

    This code has worked for (literally) years. It works fine on win2k3. It works fine when it runs on win2k8 and points to a share on a win2k3 server. It also runs fine if the share is located on the same win2k8 machine the code is running on. But if you run it on win2k8 and point it to a share on a different win2k8 server, it fails. Both Component A and Component B exist in their own Windows service running under a domain admin account. The shares are configured for "Everyone/Full control" in all test environments, as are the underlying folders the shares point to. All machines are in the same domain.

    During debugging I realized the file actually does exist by the time I check manually for it. After several iterations it occurred to me that the file doesn't "show up" until some delay passes, so I put the loop below into Component B:

        int nCounter = 0;
        while (true) {
            CFileStatus fs;
            if (CFile::GetStatus(tempname, fs))
                break;
            SleepEx(100, FALSE);
            nCounter++;
        }

    This loop does, in fact, exit, and nCounter is generally between 80 and 90 iterations when it does, indicating the file "appears" approximately 8 to 9 seconds later. Once the loop exits, the code can successfully rename the file and all further processing appears to work. I put a CFile::GetStatus call in Component A immediately before it calls into Component B, and that call succeeds - it can see the file and get its true size - yet the call into Component B made immediately afterwards cannot see the file until the delay passes. I have verified the pathnames are precisely the same, as they would have to be for the calls to eventually succeed after a pause of 8 to 9 seconds.

    When something like this occurs I always assume there is a bug in my code until proven otherwise, but given that (a) this code has executed properly for a very long time and (other than my added diagnostic loop) has not changed, and (b) it works in all environments except a win2k8-to-win2k8 share, I'm guessing there is some OS behavior here that I do not understand. Any insight would be helpful. Thanks!
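
    A plausible culprit for the win2k8-to-win2k8 case, offered as a guess rather than a confirmed diagnosis: two Server 2008 machines speak SMB2 to each other (2008-to-2003 falls back to SMB1), and the SMB2 client redirector caches file metadata and "file not found" results for several seconds. The cache lifetimes are reportedly controlled by the FileInfoCacheLifetime, FileNotFoundCacheLifetime and DirectoryCacheLifetime values under HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters, which may be worth experimenting with on the client machine. Independent of the root cause, a bounded wait is safer than an unbounded polling loop; a minimal C# sketch of the same idea (the C++ equivalent is a straightforward translation):

        using System;
        using System.IO;
        using System.Threading;

        static class FileVisibility
        {
            // Polls until the file becomes visible on the share or the
            // timeout expires; returns false instead of spinning forever.
            public static bool WaitUntilVisible(string uncPath, TimeSpan timeout)
            {
                DateTime deadline = DateTime.UtcNow + timeout;
                while (DateTime.UtcNow < deadline)
                {
                    if (File.Exists(uncPath))
                        return true;
                    Thread.Sleep(100);
                }
                return false;
            }
        }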

    Read the article

  • Partner Webcasts: EMEA Alliances and Channels Hardware Webinars, July 2012

    - by rituchhibber
    Dear partner, Oracle is pleased to invite you to the following webcasts dedicated to our EMEA partner community and designed to provide you with important news on our SPARC and Storage product portfolios. Please ensure you don't miss these unique learning opportunities!

    1. How to Make Money Selling SPARC! 3pm CET (2pm UK), Tuesday, July 10, 2012. The webcast will be hosted by Rob Ludeman, from SPARC Product Management, and Thomas Ressler, WWA&C Alliances Consultant. Agenda: to bring our partners timely, valuable information focused on increasing their success in selling SPARC systems. The webcast will be focused on specific topics and will last approximately 30 minutes. You can submit your questions via WebEx chat, and there will be a live Q&A session at the end of the webcast. REGISTER NOW

    2. Introduction to Oracle's New StorageTek SL150 Modular Tape Library. 3pm CET (2pm UK), Thursday, July 12, 2012. This webcast will help you understand Oracle's new StorageTek SL150 modular tape library, the first scalable tape library designed for small and midsized companies that are experiencing high growth. Built from Oracle software and StorageTek library technology, it delivers a cost-effective combination of ease of use and scalability, resulting in overall TCO savings. During the webcast, Cindy McCurley, from Tape Product Management, will introduce you to the latest addition to the Oracle tape storage product portfolio, the SL150 Modular Tape Library. This 60-minute webcast will cover the product's features, positioning, unique selling points and a competitive overview of StorageTek. You can submit your questions via WebEx chat, and there will be a live Q&A session at the end of the webcast. REGISTER NOW

    Delivery format: this FREE online LIVE eSeminar will be delivered over the Web and conference call. Note: please join the call 10 minutes before the scheduled start time. We look forward to your participation.

    Best regards,
    Giuseppe Facchetti, EMEA Partner Business Development Manager, Oracle Hardware Sales
    Sasan Moaveni, EMEA Storage Sales Manager, Oracle Hardware Sales

    Read the article

  • Is it safe to use S3 over HTTP from EC2, as opposed to HTTPS?

    - by Marc
    I found that there is a fair amount of overhead when uploading a lot of small files to S3, and some of this overhead comes from SSL itself. How safe is it to talk to S3 without SSL when running in EC2? From the awesome comments below, here are some clarifications: this is NOT a question about HTTPS versus HTTP in general, or about the sensitivity of my data. I'm trying to get a feel for the networking and protocol particularities of EC2 and S3. For example: Are we guaranteed to pass through only the AWS network when communicating from EC2 to S3? Can other AWS users (apart from staff) sniff my communications between EC2 and S3? Is authentication in their API done on every call, and thus credentials are passed on every call, or is there some kind of authenticated session? I am using the jets3t lib. Feedback from people with some AWS experience would be appreciated. Thanks, Marc
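
    On the last clarification: S3's REST authentication is per-request and stateless, and the secret key itself is never transmitted; each request carries a signature computed over a canonical string. A rough C# sketch of the classic (v2-era) HMAC-SHA1 scheme, for illustration only; jets3t performs the equivalent internally in Java:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class S3SignatureSketch
        {
            // StringToSign for a simple GET:
            //   VERB \n Content-MD5 \n Content-Type \n Date \n Resource
            static string Sign(string secretKey, string verb, string resource, string httpDate)
            {
                string stringToSign = verb + "\n\n\n" + httpDate + "\n" + resource;
                using (HMACSHA1 hmac = new HMACSHA1(Encoding.UTF8.GetBytes(secretKey)))
                {
                    return Convert.ToBase64String(
                        hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));
                }
            }

            static void Main()
            {
                string date = DateTime.UtcNow.ToString("R"); // RFC 1123 Date header
                string sig = Sign("secretKeyGoesHere", "GET", "/mybucket/myobject", date);
                // The request then carries: Authorization: AWS <AccessKeyId>:<signature>
                Console.WriteLine("Authorization: AWS AKIAEXAMPLE:" + sig);
            }
        }

    So over plain HTTP an observer could read your payloads and replay captured requests until their timestamps expire, but could not recover the secret key from the traffic.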

    Read the article

  • Consistency of an object

    - by Stefano Borini
    I tend to keep my objects consistent during their lifetime. In some cases, setting up an object requires multiple calls to different routines. For example, a connection object might operate this way:

        Connection c = new Connection();
        c.setHost("http://whatever");
        c.setPort(8080);
        c.connect();

    Please note this is just a stupid example to help you understand the point. In between the calls to setHost and setPort, the object is inconsistent, because the port has not been specified yet, so this code would crash:

        Connection c = new Connection();
        c.setHost("http://whatever");
        c.connect();

    Meaning that it's a prerequisite for connect() that both setHost and setPort have been called first; otherwise it won't be able to operate, as its state is inconsistent. You may fix the issue with a default value, but there may be cases where no sensible default can be devised. We assume in the latter example that there's no default for the port, and therefore a call to c.connect() without first calling both setHost and setPort leaves the object in an inconsistent state. This, to me, points at an incorrect interface design, but I may be wrong, so I want to hear your opinion. Do you organize your interfaces so that the object is always in a consistent (i.e. workable) state both before and after each call? Edit: please don't try to solve the problem I gave above; I know how to solve that. My question is much broader. I am looking for a design principle, officially or informally stated, regarding consistency of object state between calls.
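
    One commonly cited principle here is that a constructor should establish the class invariant: anything connect() requires should be impossible to omit, which usually means moving mandatory configuration into the constructor (or a builder) and leaving setters only for genuinely optional knobs. A minimal C# sketch of the idea (names chosen to mirror the example, not taken from any particular library):

        using System;

        // The object can never exist in a "host set but port missing" limbo,
        // so Connect() never sees an inconsistent object.
        public sealed class Connection
        {
            private readonly string host;
            private readonly int port;

            public Connection(string host, int port)
            {
                if (string.IsNullOrEmpty(host))
                    throw new ArgumentException("host is required");
                if (port < 1 || port > 65535)
                    throw new ArgumentOutOfRangeException("port");
                this.host = host;
                this.port = port;
            }

            public void Connect()
            {
                // All prerequisites are guaranteed by construction.
                Console.WriteLine("connecting to " + host + ":" + port);
            }
        }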

    Read the article

  • Connect to MS SQL 2008 on a local VM

    - by Campo
    I have a test machine: Server 2008 with Hyper-V and MSSQL 2008 Enterprise. Let's call it MACHINE A. On it is a VM, also Server 2008, with another MSSQL 2008 Enterprise instance. Call it VM B. I set up a DB on MACHINE A, then backed it up and restored it onto VM B following the "prepare database for mirroring" instructions on MSDN. I used to be able to connect to the VM B instance from the main test server (MACHINE A), but now for some reason I cannot. It cannot seem to find the instance at all, even when I browse network databases. I can ping the VM from any computer on the network and access its shares, so I know it is discoverable. Maybe it's just the end of a long day and I am missing something here.

    Read the article

  • High-Level Application Architecture Question

    - by Jesse Bunch
    So I'm really wanting to improve how I architect the software I write. I want to focus on maintainability and clean code. As you might guess, I've been reading a lot of resources on this topic, and all that's doing is making it harder for me to settle on an architecture, because I can never tell whether my design is the one a more experienced programmer would have chosen. So I have these requirements: I should connect to one vendor and download form submissions from their API; we'll call them CompanyA. I should then map those submissions to a schema fit for submitting to another vendor, for integration with the email service provider (ESP); we'll call them CompanyB. I should then submit those responses to the ESP (CompanyB) and instruct the ESP to send the submitter an email. So basically, I'm copying data from one web service to another and then performing an action at the latter web service. I've identified a couple of high-level services: the service that downloads data from CompanyA, which I called the CompanyAIntegrator, and the service that submits the data to CompanyB, which I called the CompanyBIntegrator. So my questions are these: Is this a good design? I've tried to separate the concerns and am planning to use the facade pattern to make the integrators interchangeable if the vendors change in the future. Are my naming conventions accurate and meaningful to you (who know nothing specific about the project)? Now that I have these services, where should I do the work of taking output from the CompanyAIntegrator and getting it into the format for input to the CompanyBIntegrator? Is it OK for this to be done in main()? Do you have any general pointers on how you'd code something like this? I imagine this scenario is common to us engineers, especially those working in agencies. Thanks for any help you can give. Learning how to architect well is really mind-cluttering.
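
    On the question of where the mapping belongs: one conventional shape for a copy-transform-push job like this is to put the schema translation behind its own small class, so main() is reduced to pure wiring. A C# sketch under that assumption; every type name here is hypothetical:

        using System.Collections.Generic;

        // Hypothetical record shapes for the two vendors.
        class FormSubmission { public string Email; public string Name; }
        class EspContact { public string EmailAddress; public string FullName; }

        interface ISubmissionSource        // the role CompanyAIntegrator plays
        {
            IEnumerable<FormSubmission> Download();
        }

        interface IEspSink                 // the role CompanyBIntegrator plays
        {
            void Submit(EspContact contact);
            void TriggerEmail(EspContact contact);
        }

        // The schema mapping gets its own seam, so neither integrator
        // knows about the other vendor's format.
        static class SubmissionMapper
        {
            public static EspContact ToEspContact(FormSubmission s)
            {
                return new EspContact { EmailAddress = s.Email, FullName = s.Name };
            }
        }

        static class Pipeline
        {
            // main() then only constructs concrete integrators and calls Run.
            public static void Run(ISubmissionSource source, IEspSink sink)
            {
                foreach (FormSubmission s in source.Download())
                {
                    EspContact contact = SubmissionMapper.ToEspContact(s);
                    sink.Submit(contact);
                    sink.TriggerEmail(contact);
                }
            }
        }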

    Read the article

  • How do I write a functional specification quickly and efficiently?

    - by giddy
    So I just read some fabulous articles by Joel on specs here (written in 2000!). I read all 4 parts, but I'm looking for a methodical approach to writing my specs. I'm the only (lonely) dev working on this fairly complicated app (or family of apps) for a very well known finance company. I've never made something this serious. I started out by writing something like a bad spec, an overview of sorts, and it has wasted a LOT of my time. I've also made 3 mockup-kinda-thingies for my client, so I have a good understanding of what they want. I've also released a preview (a throwaway working app with the most basic workflow), and I've only written and tested some of the very core/base systems. I think the mistake I've been making so far is not writing a detailed spec, so I'm getting to it now. The whole thing comprises: an MVC website (for admins and data viewing), 2 Silverlight modules (for 2 specific tasks), and 1 desktop application. I'm totally short on time and resources and need to get this done quickly; I also need to make sure these guys can read it equally quickly and painlessly. So how do I go about it? I'm looking for any tips, any real-world stuff. How do you usually do it? Do you make a mock screenshot of every dialog/form/page? I'm thinking of making a dummy ASP.NET Web Forms project, then filling in HTML files in folders to mirror my MVC URL structure, then having a section in the spec for the website with a page for every URL I've got, each with a screenshot. For my WinForms app, I've made somewhat of a demo WinForms project; would I then put in a dialog, or structure everything as I would in the real app, and then screenshot it?

    Read the article

  • How to improve batching performance

    - by user4241
    Hello, I am developing a sprite-based 2D game for mobile platform(s), and I'm using OpenGL (well, actually Irrlicht) to render graphics. First I implemented sprite rendering in a simple way: every game object was rendered as a quad with its own GPU draw call, meaning that if I had 200 game objects, I made 200 draw calls per frame. Of course this was a bad choice, and my game was completely CPU-bound, because there is a small CPU overhead associated with every GPU draw call; the GPU stayed idle most of the time. Now, I thought I could improve performance by collecting objects into large batches and rendering these batches with only a few draw calls. I implemented batching (so that every game object sharing the same texture is rendered in the same batch) and thought that my problems were gone... only to find out that my frame rate was even lower than before. Why? Well, I have 200 (or more) game objects, and they are updated 60 times per second. Every frame I have to recalculate the new position (translation and rotation) of the vertices on the CPU (the GPU on mobile platforms does not support instancing, so I can't do it there), and doing this calculation 48,000 times per second (200 * 60 * 4, since every sprite has 4 vertices) simply seems to be too slow. What could I do to improve performance? All game objects are moving/rotating (almost) every frame, so I really do have to recalculate vertex positions. The only optimization I could think of is a look-up table for rotations, so that I wouldn't have to calculate them. Would point sprites help? Any nasty hacks? Anything else? Thanks.
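
    On the look-up table idea: precomputing sine and cosine at a fixed angular resolution trades a small, fixed amount of memory for the per-sprite trig. A C# sketch of the technique (the 0.1-degree resolution is an arbitrary choice):

        using System;

        // Sketch of a sine/cosine look-up table at 0.1-degree resolution.
        public static class RotationLut
        {
            private const int Steps = 3600;                   // 360 deg / 0.1 deg
            private static readonly float[] Sin = new float[Steps];
            private static readonly float[] Cos = new float[Steps];

            static RotationLut()
            {
                for (int i = 0; i < Steps; i++)
                {
                    double radians = i * (2.0 * Math.PI / Steps);
                    Sin[i] = (float)Math.Sin(radians);
                    Cos[i] = (float)Math.Cos(radians);
                }
            }

            // degrees may be any value; the index is wrapped into [0, Steps).
            public static void Rotate(float x, float y, float degrees,
                                      out float rx, out float ry)
            {
                int index = (int)(degrees * 10f) % Steps;
                if (index < 0) index += Steps;
                float s = Sin[index], c = Cos[index];
                rx = x * c - y * s;
                ry = x * s + y * c;
            }
        }

    Whether the trig is actually the bottleneck is worth verifying with a profiler first; at 200 sprites, the per-frame vertex re-upload and the per-object update logic are just as likely to dominate as the sin/cos calls.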

    Read the article

  • Windows Handling Piped Commands Error Redirection

    - by jpmartins
    Warning: I am no expert at building scripts, and sorry for my lousy English. To generate a CSV from a database query, I'm using the following commands:

        CALL java.exe -classpath ... com.xigole.util.sql.Jisql -user dmfodbc -pf pwd.file -driver com.sybase.jdbc3.jdbc.SybDriver -cstring %constr% -c ; -input 42.sql -formatter csv -delimiter ; 2>%LOGFILE% | CALL grep -v -e "SELECT right" -e "executing: " -e " rows affect" > %FicheiroR% 2>%LOGFILE%

    I'm using a Windows implementation of grep. The 2>%LOGFILE% on both the java and the grep commands causes an error message indicating the file is being used by another process. The ugly workaround I have come up with is to redirect grep's errors to a temporary %LOGFILE%.aux:

        java ... | grep ... 2>%LOGFILE%.aux
        type %LOGFILE%.aux >> %LOGFILE%
        del %LOGFILE%.aux

    What is a better solution?
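
    One approach that avoids two processes opening the same log at once: group the whole pipeline in parentheses and redirect its stderr a single time, since in cmd.exe a redirection placed after a parenthesized block applies to every command inside it. A sketch, untested against the exact command line (the "..." elisions are kept from the original):

        REM One stderr redirection for the whole pipeline, so java and grep
        REM no longer try to open %LOGFILE% at the same time.
        (java.exe -classpath ... com.xigole.util.sql.Jisql ... | grep -v -e "SELECT right" -e "executing: " -e " rows affect" > %FicheiroR%) 2>> %LOGFILE%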

    Read the article

  • Can I find which script outputs which error?

    - by ibrahim
    There is a script which calls other scripts, and they call others... I don't know exactly which scripts are called or how many of them there are. I only know that some of them add iptables rules, and I get this error when I call the root script:

        iptables: No chain/target/match by that name.
        iptables: No chain/target/match by that name.

    My problem is that I cannot find which script outputs these errors. Is there any way or tool to find that out?
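
    One generic trick, assuming the scripts invoke iptables through $PATH rather than by absolute path: drop a wrapper script earlier in the PATH that logs who called it before handing off to the real binary. A sketch (the paths are assumptions; adjust for your system):

        #!/bin/bash
        # Save as /usr/local/bin/iptables and chmod +x it, assuming
        # /usr/local/bin precedes /sbin in the scripts' PATH.
        # Log the parent process's command line, then run the real iptables.
        echo "$(date '+%F %T') caller: $(ps -o args= -p $PPID)" >> /tmp/iptables-callers.log
        exec /sbin/iptables "$@"

    Alternatively, running the root script with bash -x (optionally setting PS4='+ $BASH_SOURCE:$LINENO: ' first) traces every command together with the file and line it came from.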

    Read the article

  • Content farms, scraper sites, aggregators: real-world examples? [closed]

    - by Marco Demaio
    Could you please clarify something for me about content farms, scraper sites, and aggregators, with real-world examples? efreedom.com is a scraper site, not a content farm, because it simply copies and pastes content from Stack Overflow, right? ehow.com and squidoo.com are content farms? They don't copy and paste content; they just generate fresh new user-generated content, but too much of it and too quickly. experts-exchange.com is NOT a content farm or a scraper site, right?! It's simply that many people (me too) hate it (they also wrote to Matt Cutts) because it shows up high in Google while providing a useless question with no answer. There are also many sites that act as content aggregators in the form of specialized directories (let's call them CASDs); I don't know how else to define them. Do they have a specific definition? Anyway, are these CASDs content farms, scraper sites, or something else? Basically these CASDs search for all sites of the same type, e.g. restaurant websites: they copy and paste the content found on "Restaurant A's" site and create on their aggregator site a new page called "Restaurant A", then do the same for all websites of the same type, thus creating a sort of directory of restaurants. Later on, the CASD sends an email to the owner of "Restaurant A" (usually the address is on the website) with a username and password to let him modify/update his own page on the CASD site. Later still, the CASD might ask the owner of "Restaurant A" for money because they bring him traffic; otherwise they remove his page from the aggregator. Someone could call these simply directories, but I think a directory is different, because a directory is something you ask to add your site to by filling in a form, not something that takes content from your existing site without specific acceptance from the site's owner. I also really wonder how Google will sort out all these messy sites packed with content that show up more and more, everywhere, in search results.

    Read the article

  • How to run a boot loader in VMware?

    - by Asim Haroon
    I am using Ubuntu as a virtual machine in VMware. I have used this code to write a boot loader which should print "Hello World" on the screen:

        [BITS 16]
        [ORG 0x7C00]

        MOV SI, HelloString
        CALL PrintString
        JMP $

        PrintCharacter:
            MOV AH, 0x0E
            MOV BH, 0x00
            MOV BL, 0x07
            INT 0x10
            RET

        PrintString:
        next_character:
            MOV AL, [SI]
            INC SI
            OR AL, AL
            JZ exit_function
            CALL PrintCharacter
            JMP next_character
        exit_function:
            RET

        HelloString db 'Hello World', 0
        TIMES 510 - ($ - $$) db 0
        DW 0xAA55

    I wrote this code in the text editor in Ubuntu and saved the file as Boot.asm. Then I compiled Boot.asm to boot.bin using this command, and it didn't give me any errors:

        nasm -f bin -o boot.bin Boot.asm

    After that I copied boot.bin to my USB stick and took it to my Windows OS, where I burned the boot.bin file to boot.img and boot.iso files. Then I created a new virtual machine named "booter"; when it asked for the .iso file of the OS I wanted to run, I gave it the boot.iso file described above. When I powered on that virtual machine, it gave me this error:

        PXE-M0F: No boot filename received
        PXE-M0F: Exiting Intel PXE ROM
        Operating System not found

    Please tell me what the main problem is and how I can overcome it.
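
    The PXE lines mean the VM found no bootable device and fell through to network boot: the ISO it was given was very likely a plain data ISO rather than a bootable (El Torito) one. A sketch of one simple alternative, assuming access to a Linux shell for the dd step: pad the 512-byte boot sector into a 1.44 MB floppy image and attach that to the VM as a floppy drive instead of an ISO.

        # Sketch: build a 1.44 MB floppy image whose first sector is boot.bin.
        dd if=/dev/zero of=floppy.img bs=512 count=2880   # blank 1.44 MB image
        dd if=boot.bin of=floppy.img conv=notrunc         # overwrite sector 0, keep size
        # In VMware: add a floppy drive to the VM, point it at floppy.img,
        # and make sure the BIOS boot order tries the floppy first.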

    Read the article

  • WebSocket Samples in GlassFish 4 build 66 - javax.websocket.* package: TOTD #190

    - by arungupta
    This blog has published a few entries on using the JSR 356 Reference Implementation (Tyrus) integrated in GlassFish 4 promoted builds:

    TOTD #183: Getting Started with WebSocket in GlassFish
    TOTD #184: Logging WebSocket Frames using Chrome Developer Tools, Net-internals and Wireshark
    TOTD #185: Processing Text and Binary (Blob, ArrayBuffer, ArrayBufferView) Payload in WebSocket
    TOTD #186: Custom Text and Binary Payloads using WebSocket
    TOTD #189: Collaborative Whiteboard using WebSocket in GlassFish 4

    The earlier blogs created a WebSocket endpoint as:

        import javax.net.websocket.annotations.WebSocketEndpoint;

        @WebSocketEndpoint("websocket")
        public class MyEndpoint { . . .

    Based upon discussion in the JSR 356 EG, the package names have changed to javax.websocket.*. So the updated endpoint definition looks like:

        import javax.websocket.WebSocketEndpoint;

        @WebSocketEndpoint("websocket")
        public class MyEndpoint { . . .

    The POM dependency is:

        <dependency>
          <groupId>javax.websocket</groupId>
          <artifactId>javax.websocket-api</artifactId>
          <version>1.0-b09</version>
        </dependency>

    And if you are using GlassFish 4 build 66, then you also need to provide a dummy EndpointFactory implementation as:

        import javax.websocket.WebSocketEndpoint;

        @WebSocketEndpoint(value="websocket", factory=MyEndpoint.DummyEndpointFactory.class)
        public class MyEndpoint { . . .

          class DummyEndpointFactory implements EndpointFactory {
            @Override
            public Object createEndpoint() { return null; }
          }
        }

    This is only interim and will be cleaned up in subsequent builds, but I've seen a couple of complaints about it already, so it deserves a short blog. Have you been tracking the latest Java EE 7 implementations in GlassFish 4 promoted builds?

    Read the article

  • Can AJAX in a CMS slow down your server?

    - by Saif Bechan
    I am currently developing some plugins for WordPress, and I was wondering which route to take. Let's take an example: you want to display the last 3 tweets on your page.

    Option 1: You do things the normal way inside WordPress. Someone visits the website; while generating the page, you fetch the tweets in PHP via the Twitter API and display them where you want. The small problem with this is that you have to wait for the response from Twitter. This takes a few ms. No real problem; this question is just out of curiosity.

    Option 2: Here you don't do anything in WordPress on the initial load, but you do have the API inside. You just generate the page, and as soon as the page is done on the client side, you make a small AJAX call back to the server via a WordPress plugin to fetch your latest tweets, i.e. asynchronously. The problem with this, IMO, is that you put much more stress on your server. For starters, you have two HTTP requests instead of one. Secondly, the WordPress core has to load twice instead of once.

    Other options: I know there are a lot of other options: 1) getting the tweets directly via JavaScript, with no stress on the server at all; 2) caching the tweets so they are fetched from the DB instead of hitting the API every time; 3) getting the tweets from an AJAX call that is not a WordPress plugin; 4) many more. My question: comparing only options 1 and 2, which would be the better choice?

    Read the article

  • Advice for starting a video game design career

    - by Allen Gabriel Baker
    I'm 24 and have a passion for video games and game design. I've decided I want to design video games as my career. I have no experience with designing video games or coding, but I'm interested and willing to learn. I want a job at any level, but what would I need to land one? I have no college experience and I have no money. What is a cheap school? Do I really need to go to school for this, or can I learn on my own? Is it possible to do this with no money? I'm literally broke, but I want this so badly I feel like it's the only career I'll enjoy. I want to call up companies and ask them what they are looking for in someone they want to hire; is that a good idea? Also, I don't know the history of video game design, and I don't want to sound like a dummy when someone says something about this field or talks about a famous designer and I have no idea who they're talking about. So what is the key info when it comes to this field, and where should I find it? Hopefully some of you guys and girls can help me out. I know that in the future I will create something everyone will enjoy, and you'll remember when you gave me advice, and I will always remember you for helping me. I'm gifted, I know I am, and I want to share my gift with the rest of the world by making games that change the industry. Help me out please.

    Read the article

  • Warning: DocumentRoot * does not exist on CentOS 6

    - by user1213807
    My virtual host lines:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /www/docs/example.com
            ServerName example.com
            ErrorLog logs/example.com-error_log
            CustomLog logs/example.com-access_log common
        </VirtualHost>

    And when I restart Apache (sudo apachectl -k stop), I get this error:

        Warning: DocumentRoot [/www/docs/example.com] does not exist

    I've checked a few things: all file and directory permissions are OK, everything is 755. I thought maybe the error was SELinux-related and disabled it, but that didn't work; I still get the same error. How can I fix this problem?
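
    Two usual suspects are worth ruling out, sketched below: the directory genuinely missing (Apache checks DocumentRoot when it parses the config), and an SELinux label that httpd is not allowed to read. Note that disabling SELinux in /etc/selinux/config only takes effect after a reboot, while setenforce 0 switches to permissive mode immediately. The commands assume the path from the vhost above:

        # First make sure the directory really exists.
        sudo mkdir -p /www/docs/example.com

        # If it does, inspect its SELinux label...
        ls -Zd /www/docs/example.com

        # ...and give the tree the httpd content type, persistently.
        sudo semanage fcontext -a -t httpd_sys_content_t "/www(/.*)?"
        sudo restorecon -Rv /www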

    Read the article

  • Multiple CAS servers with Microsoft Exchange and selective authorization

    - by John Wilcox
    I have a Microsoft Exchange 2010 organization within one Windows domain, and users access it through OWA. For simplicity, let's say I currently have one CAS server (CAS 1), which is accessible only through a VPN connection. Let's call the users connecting to the first CAS server group A. For some users, though, I need to install another CAS server (CAS 2) so that they can connect without a VPN connection. Let's call those users group B. What I need to achieve is that group A can only log in to CAS 1 and group B can only log in to CAS 2. Now, I know that one can disable/enable OWA per user, but in my case that is not enough, because OWA must be enabled for both groups.

    Read the article

  • Proper Data Structure for Commentable Comments

    - by Wesley
    I've been struggling with this on an architectural level. I have an object which can be commented on; let's call it a Post. Every Post has a unique ID. Now I want to comment on that Post, and I can use the ID as a foreign key: each PostComment has an ItemID field which correlates to the Post. Since each Post has a unique ID, it is very easy to assign "top level" comments. When I comment on a comment, however, I feel like I now need a PostCommentComment, which attaches to the ID of the PostComment. Since IDs are assigned sequentially within each table, I can no longer simply use ItemID to differentiate where in the tree the comment is assigned; i.e., both a Post and a PostComment might have an ID of '5', so my foreign key relationship is ambiguous. This seems like it could go on infinitely, with PostCommentCommentComments, etc. What's the best way to solve this? Should I have a field in the comment called "IsPostComment" or something of the like, to know which collection to attach the ID to? That strikes me as the best solution I've seen so far, but now I feel like I need to make recursive database calls, which start to get expensive. Meaning, I get a Post and get all PostComments where ItemID == Post.ID && IsPostComment == true. Then I take that as a collection, gather all the IDs of the PostComments, and do another search where ItemID == PostComment[all].ID && IsPostComment == false, then repeat infinitely. This means I make a call for every layer, and if I'm pulling 100 Posts, I might make 1000 DB calls to get 10 layers of comments each. What is the right way to do this?
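
    The standard relational answer is a single Comments table that references itself: every comment stores a nullable ParentCommentId pointing at another comment (null means top-level), plus one PostId, so a whole thread is fetched in one query and nested in memory. A C# sketch of the one-pass tree build (all names hypothetical):

        using System.Collections.Generic;
        using System.Linq;

        public class Comment
        {
            public int Id;
            public int PostId;
            public int? ParentCommentId;   // null = top-level comment on the post
            public List<Comment> Replies = new List<Comment>();
        }

        public static class CommentTree
        {
            // One query fetches all comments for a post; one pass nests them.
            public static List<Comment> Build(IEnumerable<Comment> flat)
            {
                var byId = flat.ToDictionary(c => c.Id);
                var roots = new List<Comment>();
                foreach (var c in byId.Values)
                {
                    Comment parent;
                    if (c.ParentCommentId.HasValue &&
                        byId.TryGetValue(c.ParentCommentId.Value, out parent))
                        parent.Replies.Add(c);
                    else
                        roots.Add(c);
                }
                return roots;
            }
        }

    This removes both the ID collision (comments only ever reference other comments) and the one-round-trip-per-level pattern: arbitrarily deep threads cost one query per post.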

    Read the article

  • Creating packages in code - Workflow

    This is just a quick one prompted by a question on the SSIS Forum: how to programmatically add a precedence constraint (aka workflow) between two tasks. To keep the code simple I've actually used two Sequence containers, which are often used as anchor points for a constraint. Very often this is when you have a task that you wish to conditionally execute based on an expression. If it is the first or only task in the package, you need somewhere to anchor the constraint to, so you can then set the expression on it and control the flow of execution. Anyway, back to my code sample; the code is actually pretty simple, and hopefully the comments explain exactly what is going on:

        Package package = new Package();
        package.Name = "SequenceWorkflow";

        // Add the two sequence containers to provide anchor points for the constraint.
        // If you use tasks, it follows exactly the same pattern; they all derive from Executable.
        Sequence sequence1 = package.Executables.Add("STOCK:Sequence") as Sequence;
        sequence1.Name = "SEQ Start";
        Sequence sequence2 = package.Executables.Add("STOCK:Sequence") as Sequence;
        sequence2.Name = "SEQ End";

        // Add the precedence constraint. Here we use the package's constraint collection,
        // as it hosts the two objects we want to constrain (link).
        // The default constraint is a basic On Success constraint, just like in the designer.
        PrecedenceConstraint constraint = package.PrecedenceConstraints.Add(sequence1, sequence2);

        // Change the settings to use a (dummy) expression only.
        constraint.EvalOp = DTSPrecedenceEvalOp.Expression;
        constraint.Expression = "1 == 1";

    The complete code file is available to download below: SequenceWorkflow.cs
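
    If you want to open the generated package in the designer to verify the constraint, it can be saved out with the runtime API; a small sketch (Application here is Microsoft.SqlServer.Dts.Runtime.Application, from the same namespace the snippet above already uses):

        // Persist the generated package to disk so it can be opened in BIDS.
        Application app = new Application();
        app.SaveToXml(@"C:\Temp\SequenceWorkflow.dtsx", package, null);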

    Read the article

  • Building Publishing Pages in Code

    - by David Jacobus
    Originally posted on: http://geekswithblogs.net/djacobus/archive/2013/10/27/154478.aspx

    One of the mantras we developers try to follow: ensure that the solution package we deliver to the client is complete. We build web parts, master pages, images, CSS files and other artifacts that we push to the client in a WSP (solution package), and then we have them finish the solution by building their site pages, adding the web parts to the pages. I am a proponent that we, the developers, should minimize this time-consuming work and build these site pages in code. I found a few blogs and some MSDN documentation, but not really a complete solution with all these artifacts working together. What I will discuss, and provide a solution for, is a package that has:

    1. A Master Page
    2. A Page Layout
    3. Page Web Parts
    4. Site Pages

    Almost all of it is done in code, without the development team having to finish the site-building process by spending hours or days completing the site! I am not implying that we skip this in development; in fact, we build these pages incrementally, testing our web parts as we go. I am saying that the final action in our solution is to take all these artifacts and add them to the site pages in code; the client then only needs to activate a few features and, voila, their site appears! I had a project that had me build 8 pages like this as part of the solution.

    In this blog post, I am taking a master page solution that I have called DJGreenMaster. On my Office 365 development site, it is a generic master page for a SharePoint 2010 site, with a centered three-column layout and a footer that uses a SharePoint list and web part for the footer links. I use this master page a lot in my site development; it is easy to change the color and site logo with a little CSS. I am going to add a few web parts for discussion purposes and then add those web parts to a site page in code.

    The solution package for DJGreenMaster is the basis project for building the site pages. It is a complete solution to add a master page to a site collection, containing:

    1. A Master Page module, which contains the Master Page and Page Layout
    2. A Footer module, to add the Footer Web Part
    3. Miscellaneous modules to add images, jQuery, CSS and a subsite page
    4. Three features and two feature event receivers:
       a. DJGreenCSS, used to add the master page CSS file to the Style Sheet Library, with an event receiver to check it in
       b. DJGreenMaster, used to add the Master Page and Page Layout; in an event receiver it changes the master page to DJGreenMaster, creates the footer list and checks the files in
       c. DJGreenMasterWebParts, which adds the Footer Web Part to the site collection

    I won't go over the code for this, as I will give it to you at the end of this blog post. I have discussed creating a list in code in a previous post. So we have the basis to begin what is germane to this discussion: the first two requirements are completed, and I now need to add the page web parts and build the pages in code. For the page web parts, I will use a Weather Web Part downloaded from CodePlex, which does not use a SharePoint custom list (for simplicity), and a SharePoint custom calendar web part downloaded from MSDN, to which I added some jQuery-based functionality to make the events color-coded, exceeding the built-in 10 overlays.

    Okay, now we get to the final item: creating the publishing pages. We need to add a feature to the DJGreenMaster project, which I will name DJSitePages, along with an event receiver. We will build the page at the site collection level, and all of the code necessary will be contained in the event receiver. Add a reference to Microsoft.SharePoint.Publishing.dll, found in the ISAPI folder of the 14 hive. First we will add some static methods which we will call from our event receiver:

        private static void checkOut(string pagename, PublishingPage p)
        {
            if (p.Name.Equals(pagename, StringComparison.InvariantCultureIgnoreCase))
            {
                if (p.ListItem.File.CheckOutType == SPFile.SPCheckOutType.None)
                {
                    p.CheckOut();
                }
                if (p.ListItem.File.CheckOutType == SPFile.SPCheckOutType.Online)
                {
                    p.CheckIn("initial");
                    p.CheckOut();
                }
            }
        }

        private static void checkin(PublishingPage p, PublishingWeb pw)
        {
            SPFile publishFile = p.ListItem.File;
            if (publishFile.CheckOutType != SPFile.SPCheckOutType.None)
            {
                publishFile.CheckIn("CheckedIn");
                publishFile.Publish("published");
            }
            // In case of content approval, approve the file (publishing site).
            if (pw.PagesList.EnableModeration)
            {
                publishFile.Approve("Initial");
            }
            publishFile.Update();
        }

    In a publishing site, check-in and check-out are required when dealing with pages. Okay, let's look at the FeatureActivated event receiver:

        public override void FeatureActivated(SPFeatureReceiverProperties properties)
        {
            object oParent = properties.Feature.Parent;

            if (properties.Feature.Parent is SPWeb)
            {
                currentWeb = (SPWeb)oParent;
                currentSite = currentWeb.Site;
            }
            else
            {
                currentSite = (SPSite)oParent;
                currentWeb = currentSite.RootWeb;
            }

            // Create the publishing pages.
            CreatePublishingPage(currentWeb, "Home.aspx", "ThreeColumnLayout.aspx", "Home");
            //CreatePublishingPage(currentWeb, "Dummy.aspx", "ThreeColumnLayout.aspx", "Dummy");
        }

    Basically we are calling the CreatePublishingPage method with these parameters: the current web, the name of the page, the page layout, and the title of the page. Let's look at the CreatePublishingPage method:

        private void CreatePublishingPage(SPWeb site, string pageName, string pageLayoutName, string title)
        {
            PublishingSite pubSiteCollection = new PublishingSite(site.Site);
            PublishingWeb pubSite = null;
            if (pubSiteCollection != null)
            {
                // Assign an object to the pubSite variable.
                if (PublishingWeb.IsPublishingWeb(site))
                {
                    pubSite = PublishingWeb.GetPublishingWeb(site);
                }
            }

            // Search for the page layout to base the new page on. If it cannot
            // be found in the collection, return, because the page has to be
            // based on an existing page layout.
            PageLayout currentPageLayout = FindPageLayout(pubSiteCollection, pageLayoutName);
            if (currentPageLayout == null)
            {
                return;
            }

            PublishingPageCollection pages = pubSite.GetPublishingPages();
            foreach (PublishingPage p in pages)
            {
                // The page already exists.
                if ((p.Name == pageName)) return;
            }

            PublishingPage newPage = pages.Add(pageName, currentPageLayout);
            newPage.Description = pageName.Replace(".aspx", "");
            // Here you can set some properties like:
            newPage.IncludeInCurrentNavigation = true;
            newPage.IncludeInGlobalNavigation = true;
            newPage.Title = title;

            // Build the page.
            switch (pageName)
            {
                case "Home.aspx":
                    checkOut(pageName, newPage);
                    BuildHomePage(site, newPage);
                    break;
                default:
                    break;
            }

            // Now we can check the newly created page into the Pages library.
            checkin(newPage, pubSite);
        }

    The narrative of what is going on here: (1) we find out whether we are dealing with a publishing web; (2) we get the page layout; (3) we create the page in the Pages list; (4) based on the page name, we build that page (here is where we can add the methods to build multiple pages). In the switch we call BuildHomePage, where all the work is done to add the web parts. Before adding the web parts, we need to add references to the two web part projects in the solution:

        using WeatherWebPart.WeatherWebPart;
        using CSSharePointCustomCalendar.CustomCalendarWebPart;

    We can then reference them in the BuildHomePage method. Let's look at BuildHomePage:

        private static void BuildHomePage(SPWeb web, PublishingPage pubPage)
        {
            // Get the web part manager for the page. To build other pages, get
            // their managers the same way and add each page's web parts.
            SPLimitedWebPartManager mgr = web.GetLimitedWebPartManager(
                web.Url + "/Pages/Home.aspx",
                System.Web.UI.WebControls.WebParts.PersonalizationScope.Shared);

            WeatherWebPart.WeatherWebPart.WeatherWebPart wwp =
                new WeatherWebPart.WeatherWebPart.WeatherWebPart()
                {
                    ChromeType = PartChromeType.None,
                    Title = "Todays Weather",
                    AreaCode = "2504627"
                };

            // Add the web part to a page layout web part zone.
            mgr.AddWebPart(wwp, "g_685594D193AA4BBFABEF2FB0C8A6C1DD", 1);

            CSSharePointCustomCalendar.CustomCalendarWebPart.CustomCalendarWebPart cwp =
                new CustomCalendarWebPart()
                {
                    ChromeType = PartChromeType.None,
                    Title = "Corporate Calendar",
                    listName = "CorporateCalendar"
                };
            mgr.AddWebPart(cwp, "g_20CBAA1DF45949CDA5D351350462E4C6", 1);

            pubPage.Update();
        }

    Here is what we are doing: (1) we get a reference to the SharePoint limited web part manager for Home.aspx; (2) we instantiate a new Weather Web Part and use the manager to add it to the page in a web part zone identified by ID (hence the need for a page layout whose zone IDs the developer knows); (3) we instantiate the Calendar Web Part and use the manager to add it to the page; (4) we call the publishing page's Update method; and (5) lastly, the CreatePublishingPage method checks in the page just created.

    Okay! I know we could make a home page look much better! However, I built this whole integrated solution in less than a day, with the caveat that the Green Master was already built. So what am I saying? Build your web parts, master pages and other artifacts, and at the very end of the engagement build the pages in code. The client will be very happy! Here is the code for this solution.

    Read the article

  • How Can You Get More Productive In Life Sciences Sales?

    - by charles.knapp
    Only half of all doctors will meet with pharmaceutical sales reps, and that percentage continues to decrease. Furthermore, when reps are granted an opportunity to share information, the average interaction is only about a minute and a half. Concurrently, call quotas continue to increase. Why does this matter? Sales reps need to spend less time on traditional planning and after-call reporting, more time making calls, and make more productive use of short presentation times. Fortunately for sales reps, Oracle offers the first life sciences CRM that is designed to double sales time and halve reporting time. In particular, our new Life Sciences Edition Offline Client is designed so that you can actually turn the screen around, making your CRM useful for presentations and not just reporting, whether you are connected to the cloud or working offline, such as in restricted clinical environments. Watch Piers Evans, Industry Strategy Director, show what this looks like in the day of a typical pharmaceutical sales representative.

    Read the article

  • Android game performance regarding timers

    - by iQue
    I'm new to the game-dev world, and I have a tendency to over-simplify my code; sometimes this costs me a lot of memory. I'm using a custom TimerTask that looks like this:

        public class Task extends TimerTask {
            private MainGamePanel panel;

            public Task(MainGamePanel panel) {
                this.panel = panel;
            }

            /** When the timer executes, this code is run. */
            public void run() {
                panel.createEnemies();
            }
        }

    This task calls this method in my view:

        public void createEnemies() {
            Bitmap bmp = BitmapFactory.decodeResource(getResources(), R.drawable.female);
            if (enemyCounter < 24) {
                enemies.add(new Enemy(bmp, this));
            }
            enemyCounter++;
        }

    Since I call this in the onCreate method instead of in my view's constructor (because my enemies need the width and height of the view), I'm wondering whether this will work when I have multiple levels in the game (starting a new intent), and whether this kind of timer really is the best way, performance-wise, to add a delay between the spawn times of my enemies.

    Adding the code for my timer, in case anyone comes here because they don't understand timers:

        private Timer timer1 = new Timer();
        private long delay1 = 5 * 1000; // 5 second delay

        public void surfaceCreated(SurfaceHolder holder) {
            timer1.schedule(new Task(this), 0, delay1); // schedule my task with the delay
            thread.setRunning(true);
            thread.start();
        }

    Read the article

  • How a password is transmitted to the AD server

    - by erdogany
    My question is how ADSI performs the SetPassword operation. From what I have read, ADSI is a COM interface, and it has more capabilities than AD provides through LDAP alone. While you are supposed to update the unicodePwd attribute of a person/account entity through LDAP, ADSI provides you a SetPassword call. I know that ADSI and AD provide Kerberos during authentication. So how is the password transmitted to the server when SetPassword is called? Is it raw, unencrypted binary data? Or does Kerberos come into play at this call?

    Read the article
