Search Results

Search found 59468 results on 2379 pages for 'time zones'.

  • MSBUILD ClickOnce Error: Deployment and application do not have matching security zones

    - by fande455
    We're trying to publish a ClickOnce application through MSBuild. We've got it working fine for an installed version of the Windows application. However, when we set Install to false so that it just runs the app from the web, we get the following error when we try to run the application from the URL: "Deployment and application do not have matching security zones". This works fine in IE; we only get the error message in Chrome and Firefox. Here is a sample of the project file settings:

    <Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="3.5">
      <PropertyGroup>
        <SrcTreeRoot>$(MSBuildProjectDirectory)\..\..\..</SrcTreeRoot>
        <!--ClickOnceDeployFolder>$(WebOutputDir)\AnalyzerPC</ClickOnceDeployFolder-->
        <ProjectGuid>{8205E593-F400-41AE-8D6F-DEA290B2DCF9}</ProjectGuid>
        <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
        <Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
        <ApplicationIcon>Graphics\EDA Icon.ico</ApplicationIcon>
        <AssemblyName>DASHQueryBuilder</AssemblyName>
        <OutputType>WinExe</OutputType>
        <RootNamespace>TetraData.Analyzer</RootNamespace>
        <FileUpgradeFlags></FileUpgradeFlags>
        <OldToolsVersion>2.0</OldToolsVersion>
        <IsWebBootstrapper>false</IsWebBootstrapper>
        <ManifestCertificateThumbprint>...</ManifestCertificateThumbprint>
        <GenerateManifests>true</GenerateManifests>
        <SignManifests>true</SignManifests>
        <SignAssembly>true</SignAssembly>
        <ManifestKeyFile>$(BuildDir)\Certificates\TetraDataCode.pfx</ManifestKeyFile>
        <ProductVersion>9.0.21022</ProductVersion>
        <PublishUrl>http://localhost/DASHQueryBuilder/</PublishUrl>
        <Install>false</Install>
        <!--InstallFrom>Web</InstallFrom-->
        <UpdateEnabled>false</UpdateEnabled>
        <MapFileExtensions>true</MapFileExtensions>
        <PublisherName>Follett Software Company</PublisherName>
        <TrustUrlParameters>true</TrustUrlParameters>
        <ApplicationRevision>0</ApplicationRevision>
        <UseApplicationTrust>false</UseApplicationTrust>
        <PublishWizardCompleted>true</PublishWizardCompleted>
        <BootstrapperEnabled>false</BootstrapperEnabled>
      </PropertyGroup>
      <Import Project="$(SrcTreeRoot)\Build\TaskInit.Tasks" />
      <Import Project="$(MSBuildBinPath)\Microsoft.CSharp.targets" />
      <Import Project="$(SrcTreeRoot)\Build\TaskOverrides.Tasks" />
      <Import Project="$(MSBuildProjectDirectory)\Analyzer.csproj" />
      <PropertyGroup>
        <PublishDir>$(WebOutputDir)\DASH Query Builder\</PublishDir>
        <ApplicationVersion>$(MajorMinorVersion).0.0</ApplicationVersion>
      </PropertyGroup>
      <Target Name="BeforeResolveReferences">
        <Copy SourceFiles="$(MSBuildProjectDirectory)\DASHQueryBuilder.config" DestinationFiles="$(MSBuildProjectDirectory)\app.config" />
      </Target>
    </Project>

    Read the article

  • Handle complexity in large software projects

    - by Oliver Vogel
    I am the lead developer on a large software project. From time to time it gets hard to handle the complexity within this project, e.g.: keeping the whole big picture in mind all the time, keeping track of teammates' work results, doing code reviews, supplying management with information, etc. Are there best practices or time management techniques for handling these tasks? Are there any tools that help you keep an overview?

    Read the article

  • Daylight saving time not changing on some Active Directory domain clients

    - by Nick Gorbikoff
    We just had the daylight saving time change in the US, and the PCs on my network are behaving strangely: some of them changed the time and some didn't. My network: two locations, both in the Midwest, in the same time zone. Location 1: 120 PCs (Windows XP and Windows 2000), with one Active Directory domain controller on Windows Server 2003 Standard and a couple of Windows 2000 servers (up to date); the rest of the servers are Xen or Debian machines (all up to date). Location 2: 10 PCs and a shared LAN NAS, connected through an OpenVPN link; all of its PCs are running fine, but they all connect to our AD domain controller. The routers/firewalls in both locations are pfSense boxes running an NTP service, also up to date. I've tried all the usual suspects: all the latest updates are installed, I've restarted the machines, the domain controller is running fine, and most computers are running fine. I have only one domain controller on my network, and my pfSense firewall also serves as the NTP server. All of the Linux machines are fine, since they query the firewall/router for the time. But about 1/3 of my PCs are one hour behind, and if I change them manually they just change back (the way domain PCs are supposed to). I've tried everything but can't think of anything else to try.

    Read the article

  • To Obtain EPOCH Time Value from a Packed BIT Structure in C [migrated]

    - by xde0037
    This is not a home assignment! We have a binary data file with the following 12-byte record structure, and I need to find an epoch time value that is packed into 42 bits, as described below:

    Field-1 : Byte 1, Byte 2, + 6 bits from Byte 3
    Time-1  : 2 bits from Byte 3 + Byte 4
    Time-2  : Byte 5, Byte 6, Byte 7, Byte 8
    Field-2 : Byte 9, Byte 10, Byte 11, Byte 12

    Field-1 and Field-2 are not a problem, as they can be extracted easily. I need the time value as an epoch time (long); it has been packed into bytes 5-8 plus bytes 3 and 4 as follows: bytes 5 to 8 (a 32-bit word) hold time-value bits 0 through 31 (byte 5 has bits 0-7, byte 6 has bits 8-15, byte 7 has bits 16-23, byte 8 has bits 24-31). The remaining 10 bits of the time value are packed into bytes 3 and 4: byte 3 has 2 bits (32 and 33), and byte 4 has the remaining bits (34 to 41). So the time value is 42 bits in total, packed as above. How do I compute the epoch value from these 42 bits? I have done something like this, but I'm not sure it gives me the correct value:

    typedef struct P_HEADER {
        unsigned int tmuNumber : 21;
        unsigned int time1     : 10;  // Bits 6,7 from Byte-3 + 8 bits from Byte-4
        unsigned int time2     : 32;  // 32 bits: Bytes 5,6,7,8
        unsigned int traceKey  : 32;
    } __attribute__((__packed__)) P_HEADER;

    Then in the code:

    P_HEADER *header1;
    // get input string in hex, etc.
    // parse the input with the header as:
    header1 = (P_HEADER *)inputBuf;
    // then print header1->time1, header1->time2 ...
    long ttime = header1->time1 | header1->time2;  // ?? is this the way to get the values out?

    Any hint or tip will be appreciated. Environment: gcc 4.1, Linux. Thanks in advance.
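    Two things are worth noting here. First, bit-fields are risky for on-disk formats, because how a compiler orders and packs them across storage units is implementation-defined. Second, ORing time1 and time2 together without shifting drops the high 10 bits. Explicit shift-and-mask extraction avoids both problems. A minimal sketch, assuming the byte mapping described above with 1-based byte numbers, and assuming the low two bits of byte 3 carry bits 32-33 (exactly which bits of byte 3 are used must be verified against the real file spec):

    #include <stdint.h>

    /* rec points at one 12-byte record; rec[0] is byte 1 of the record. */
    uint64_t unpack_time42(const uint8_t rec[12]) {
        uint64_t t = 0;
        t |= (uint64_t)rec[4];                 /* byte 5 -> bits 0..7   */
        t |= (uint64_t)rec[5] << 8;            /* byte 6 -> bits 8..15  */
        t |= (uint64_t)rec[6] << 16;           /* byte 7 -> bits 16..23 */
        t |= (uint64_t)rec[7] << 24;           /* byte 8 -> bits 24..31 */
        t |= (uint64_t)(rec[2] & 0x03) << 32;  /* byte 3, low 2 bits -> bits 32..33 */
        t |= (uint64_t)rec[3] << 34;           /* byte 4 -> bits 34..41 */
        return t;                              /* 42-bit epoch value */
    }

    This works regardless of compiler bit-field layout, since it only indexes bytes and shifts.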

    Read the article

  • Time Zones for different users

    - by Ben Sinclair
    I am creating a script that allows each user to choose their own time zone. To make this work, how do I store dates in the database so that every user sees the correct date in their time zone? Do I store the date as GMT, and then when a user who has selected GMT+10 views an item within my script, show that date in GMT+10 time? Is there a better way to do this? Examples would be great :)
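    For illustration, a minimal sketch of that store-UTC / convert-on-display pattern, written here in C++ as pseudocode for the idea (the original context is PHP/SQL). The fixed +10:00 offset is a stand-in for the user's stored preference; real code should apply full time zone rules, DST included, rather than a raw offset:

    #include <ctime>
    #include <cstdio>

    int main() {
        // 1. Store: a Unix timestamp is UTC by definition -- write this to the DB.
        std::time_t stored = std::time(nullptr);

        // 2. Display: shift by the viewer's offset only when rendering.
        long user_offset = 10 * 3600;               // user selected GMT+10
        std::time_t shifted = stored + user_offset;
        char buf[32];
        std::strftime(buf, sizeof buf, "%Y-%m-%d %H:%M", std::gmtime(&shifted));
        std::printf("shown to the GMT+10 user: %s\n", buf);
        return 0;
    }

    The key property is that the stored value never changes with the viewer; only the rendering does.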

    Read the article

  • "date_part('epoch', now() at time zone 'UTC')" not the same time as "now() at time zone 'UTC'" in PostgreSQL

    - by sirlark
    I'm writing a web-based front end to a database (PHP/PostgreSQL) in which I need to store various dates/times. The times are meant to always be entered on the client side in local time, and displayed in local time too. For storage purposes, I store all dates/times as integers (Unix timestamps) normalised to UTC. One particular field has a restriction that the timestamp filled in is not allowed to be in the future, so I tried to enforce this with a database constraint:

    CONSTRAINT not_future CHECK (timestamp-300 <= date_part('epoch', now() at time zone 'UTC'))

    The -300 is to give 5 minutes of leeway in case of slightly desynchronised times between browser and server. The problem is, this constraint always fails when submitting the current time. I've done some testing and found the following in the PostgreSQL client:

    SELECT now() -- returns the correct local time
    SELECT date_part('epoch', now()) -- returns a Unix timestamp at UTC (tested by feeding the value into the date function in PHP, correcting for its compensation to my time zone)
    SELECT date_part('epoch', now() at time zone 'UTC') -- returns a Unix timestamp two time zone offsets west, e.g. I am at GMT+2 and I get a GMT-2 timestamp

    I've figured out that dropping the "at time zone 'UTC'" solves my problem, but my question is: if 'epoch' is meant to return a Unix timestamp, which AFAIK is always meant to be in UTC, why would the 'epoch' of a time already in UTC be corrected? Is this a bug, or am I missing something about the defined/normal behaviour here?

    Read the article

  • Windows 7 "Change internet time settings" tells me I have no permission

    - by Matthias Vance
    LS, While trying to solve the problem of my computer clock always running ahead (even while the machine is on, not just on every boot), I apparently broke some security settings. All I did (as far as I can remember) was stop and start the w32time service. Now, whenever I go to the "Internet Time" tab and click "Change settings...", Windows tells me I don't have permission to do so. Facts: I am a member of the Administrators group, and in gpedit.msc I checked that the Administrators group is allowed to change the system time. Kind regards, Matthias Vance

    Read the article

  • Deleted the .AppleDouble files inside my Time Machine backups - are they still OK?

    - by Jon M
    My Ubuntu server is set up to emulate a TimeCapsule (after a very long weekend following the instructions here, here and here). My macbook pro has been backing up happily to it for a month or so now, and all seems well. The other day I was tidying up the extraneous files from my music collection on the server, got a bit loose with the find command... and ended up deleting all the .AppleDouble files underneath '/', which included the Time Machine folder. Now, Time Machine still appears to work fine, it backs up regularly, I can look through all the previous versions of my files, and they seem to restore without trouble. My question is: by deleting the .AppleDouble files, have I actually broken anything? Is the TM data still good, or should I trash it and start fresh (i.e. with a new 'day 0' full backup)?

    Read the article

  • Why is ntpd not updating the time on my server?

    - by John
    I have ntpd running on my server. It's all the default settings, except I commented out its ability to be a server to other machines:

    # restrict -4 default kod notrap nomodify nopeer noquery
    # restrict -6 default kod notrap nomodify nopeer noquery
    restrict default ignore

    If I run ntpdate -q ntp.ubuntu.com, I'm told that my machine's clock is off by 7 seconds. What's going on? How can I diagnose what's happening? Is there a log I can turn on?

    More info #1:

    # ntpq -np
         remote           refid      st t when poll reach   delay   offset  jitter
    ==============================================================================
     91.189.94.4     193.79.237.14    2 u   30   64    7  108.518   -0.136   0.361

    More info #2: Here's what this looked like when I asked the question:

    # ntpdate -q ntp.ubuntu.com
    server 91.189.94.4, stratum 2, offset 7.191308, delay 0.13310
    10 Jan 20:38:09 ntpdate[31055]: step time server 91.189.94.4 offset 7.191308 sec

    And here's what it looks like now, after restarting ntpd a couple of times (I'm assuming that's what fixed it):

    # ntpdate -q ntp.ubuntu.com
    server 91.189.94.4, stratum 2, offset 0.000112, delay 0.13164
    10 Jan 20:47:03 ntpdate[31419]: adjust time server 91.189.94.4 offset 0.000112 sec

    Read the article

  • tradeoffs of iSCSI vs. AFP when using Time Machine with a NAS?

    - by ajit.george
    I'm setting up a home NAS device (Synology DS409) that I'm planning to use for Time Machine backups (amongst other things). What are the tradeoffs between using iSCSI or AFP to mount the backup volume? The Synology wiki suggests that iSCSI is better if the Mac will be frequently disconnected from the network or sleeping, from the point of view of the volume automatically remounting. What about filesystem consistency? Given that unplugging a USB drive without properly unmounting it often requires the Time Machine volume to be repaired, would iSCSI have the same issues? Thanks in advance.

    Read the article

  • How does Windows handle Time? Updating RTC, etc. (Active Directory and Novell E-Directory)

    - by bshacklett
    I'm troubleshooting some time issues in my domain and before making any big changes I want to have a thorough understanding of what's going on. I've got a few lingering questions at the moment: What sources (rtc, ntp, etc.) are queried by Windows to keep time? How does this differ in a mixed Active Directory / Novell environment? What is the order that each source is queried in? How does Windows decide whether to act as an NTP client, peer or server? In what situations will Windows update the RTC, if ever?

    Read the article

  • What does "Windows is not a real-time operating system" mean?

    - by hydroparadise
    I came across an application called LatencyMon, which apparently does latency monitoring. I have always understood that the more load you put on the processor, the less responsive, or more latent, the system becomes. However, in the second section of the LatencyMon page, the first sentence says, "Windows is not a real-time operating system". That got me thinking. I mean, is this any different from any other operating system like Linux, Unix, or OS X? Are there any "real-time" operating systems? Or is this merely a marketing scheme to get you to buy their product? EDIT: Also, are there any examples of RTOSes out there?

    Read the article

  • Can one use Airport (Time Capsule) with an external DHCP server?

    - by DNS
    I currently share my DSL connection using a wireless router with DHCP disabled, and dnsmasq running on a Mac Mini serving DHCP & DNS. This setup is important because I have clients doing PXE boot, and I need the control over DHCP that dnsmasq provides. There is also a Time Capsule on the network that's used purely as a backup device; its wireless functions are disabled. The wireless router is starting to get a little flaky, and since it doesn't support 802.11n I'd like to replace it. Rather than buying a new router, I'd like to just use the Time Capsule. But I see no way to disable its DHCP server; when I set the connection type to PPPoE, it insists on serving DHCP. Is there any way to use Airport PPPoE with a DHCP server elsewhere on the network?

    Read the article

  • How do I synchronize two folders in Windows 7 in real time?

    - by acme
    I want Windows 7 to synchronize two folders in real time (maybe by running a service that monitors a folder). Basically I want to monitor a folder and synchronize each change (new files, changed files, deleted files) to another drive. It has to be in real time, so changes are synchronized instantly as they happen. One-direction synchronization is enough. I tried Microsoft's SyncToy, but it only syncs manually or on a schedule. Can this be achieved with Windows 7 itself, or does anyone know a freeware application for this?
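    For comparison, the core of such a watcher is small. A minimal sketch, assuming C++17's <filesystem> and simple polling (a real tool would subscribe to OS change notifications such as ReadDirectoryChangesW instead, and would also handle deletions and errors); the folder paths are hypothetical:

    #include <filesystem>
    #include <thread>
    #include <chrono>

    namespace fs = std::filesystem;

    // One-way mirror: copy anything in src that is new, or newer than its copy in dst.
    void mirror_once(const fs::path& src, const fs::path& dst) {
        for (const auto& entry : fs::recursive_directory_iterator(src)) {
            const fs::path target = dst / fs::relative(entry.path(), src);
            if (entry.is_directory()) {
                fs::create_directories(target);
            } else if (entry.is_regular_file()) {
                if (!fs::exists(target) ||
                    fs::last_write_time(entry.path()) > fs::last_write_time(target))
                    fs::copy_file(entry.path(), target, fs::copy_options::overwrite_existing);
            }
        }
    }

    int main() {
        const fs::path src = "C:/watched", dst = "D:/mirror";  // hypothetical folders
        for (;;) {
            mirror_once(src, dst);
            std::this_thread::sleep_for(std::chrono::seconds(5));  // poll interval
        }
    }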

    Read the article

  • Right-Time Retail Part 3

    - by David Dorf
    This is part three of the three-part series. Read Part 1 and Part 2 first.

    Right-Time Marketing

    Real-time isn’t just about executing faster; it extends to interactions with customers as well. As an industry, we’ve spent many years analyzing all the data that’s been collected. Yes, that data has been invaluable in helping us make better decisions like where to open new stores, how to assort those stores, and how to price our products. But the recent advances in technology are now making it possible to analyze and deliver that data very quickly… fast enough to impact a potential sale in near real-time. Let me give you two examples.

    Salesmen in car dealerships get pretty good at sizing people up. When a potential customer walks in the door, it doesn’t take long for the salesman to figure out the revenue at stake. Is this person a real buyer, or just looking for a fun test drive? Will this person buy today or three months from now? Will this person opt for the expensive packages, or go bare bones? While the salesman certainly asks some leading questions, much of the information is discerned through body language.

    But body language doesn’t translate very well over the web. Eloqua, which was acquired by Oracle earlier this year, reads internet body language. By tracking the behavior of the people visiting your web site, Eloqua categorizes visitors based on their propensity to buy. While Eloqua’s roots have been in B2B, we’ve been looking at leveraging the technology with ATG to target B2C. Knowing what sites were previously visited, how often the customer has been to your site recently, and how long they’ve spent searching can help understand where the customer is in their purchase journey. And knowing that bit of information may be enough to help close the deal with a real-time offer, follow-up email, or online customer service pop-up.

    This isn’t so different from the days gone by when the clerk behind the counter of the corner store noticed you were lingering in a particular aisle, so he walked over to help you compare two products and close the sale. You appreciated the personalized service, and he knew the value of the long-term relationship. Move that same concept into the digital world and you have Oracle’s CX Suite, a cloud-based offering of end-to-end customer experience tools, assembled primarily from acquisitions. Those tools are Oracle Marketing (Eloqua), Oracle Commerce (ATG, Endeca), Oracle Sales (Oracle CRM On Demand), Oracle Service (RightNow), Oracle Social (Collective Intellect, Vitrue, Involver), and Oracle Content (Fatwire). We are providing the glue that binds the CIO and CMO together to unleash synergies that drive the top line higher, and by virtue of the cloud approach, keep costs at bay.

    My second example of real-time marketing takes place in the store but leverages the concepts of Web marketing. In 1962 the decline of personalized service in retail began. Anyone know the significance of that year? That’s when Target, K-Mart, and Walmart each opened their first stores, and over the succeeding years the industry chose scale over personal service. No longer were you known as “Jane with the snotty kid so make sure we check her out fast,” but you suddenly became “time-starved female age 20-30 with kids.” I’m not saying that was a bad thing – it was the right thing for our industry at the time, and it enabled a huge amount of growth, cheaper prices, and more variety of products. But scale alone is no longer good enough. Today’s sophisticated consumer demands scale, experience, and personal attention. To some extent we’ve delivered that on websites via the magic of cookies, your willingness to log in, and sophisticated data analytics.

    What store manager wouldn’t love a report detailing all the visitors to his store, where they came from, and which products they examined? People trackers are getting more sophisticated, incorporating infrared, video analytics, and even face recognition. (Next time you walk in front of a mannequin, don’t be surprised if it’s looking back.) But the ultimate marketing conduit is the mobile phone. Since each mobile phone emits a unique number on WiFi networks, it becomes the cookie of the physical world. Assuming congress keeps privacy safeguards reasonable, we’ll have a win-win situation for both retailers and consumers. Retailers get to know more about the consumer’s purchase journey, and consumers get higher levels of service with the retailer.

    When I call my bank, a couple of things happen before the call is connected. A reverse look-up on my phone number identifies me so my accounts can be retrieved from Siebel CRM. Then the system anticipates why I’m calling based on recent transactions. In this example, it sees that I was just charged a foreign currency fee, so it assumes that’s the reason I’m calling. It puts all the relevant information on the customer service rep’s screen as it connects the call. When I complain about the fee, the rep immediately sees I’m a great customer and I travel lots, so she suggests switching me to their traveler’s card that doesn’t have foreign transaction fees. That technology is powered by a product called Oracle Real-Time Decisions, a rules engine built to execute very quickly, basically in the time it takes the phone to ring once.

    So let’s combine the power of that product with our new-found mobile cookie and provide contextual customer interactions in real-time. Our first opportunity comes when a customer crosses a pre-defined geo-fence, typically a boundary around the store. Context is the key to our interaction: that’s the customer (known or anonymous), the time of day and day of week, and location. Thomas near the downtown store on a Wednesday at noon means he’s heading to lunch. If he were near the mall location on a Saturday morning, that’s a completely different context. But on his way to lunch, we’ll let Thomas know that we’ve got a new shipment of ASICS running shoes on display with a simple text message. We used the context to look up Thomas’ past purchases and understood he was an avid runner. We used the fact that this was lunchtime to select the type of message, in this case an informational message instead of an offer. Thomas enters the store, phone in hand, and walks to the shoe department.

    He scans one of the new ASICS shoes using the convenient QR codes we provided on the shelf-tags, but then he starts scanning low-end Nikes. Each scan is another opportunity to both learn from Thomas and potentially interact via another message. Since he historically buys low-end Nikes and keeps scanning them, he’s likely falling back into his old ways. Our marketing rules are currently set to move loyal customers to higher-margin products. We could have set the dials to increase visit frequency, move overstocked items, increase basket size, or many other settings, but today we are trying to move Thomas to higher-margin products. We send Thomas another text message, this time a personalized offer for 10% off ASICS, good for 24 hours. Offering him a discount on Nikes would be throwing margin away since he buys those anyway. We are using our marketing dollars to change behavior that increases the long-term value of Thomas. He decides to buy the ASICS and scans the discount code on his phone at checkout. Checkout is yet another opportunity to interact with Thomas, so the transaction is sent back to Oracle RTD for evaluation. Since Thomas didn’t buy anything with the shoes, we’ll print a bounce-back coupon on the receipt offering 30% off ASICS socks if he returns within seven days. We have successfully started moving Thomas from low-margin to high-margin products.

    In both of these marketing scenarios, we are able to leverage data in near real-time to decide how best to interact with the customer and lead to an increase in the lifetime value of the customer. The key here is acting at the moment the customer shows interest, using the context of the situation. We aren’t pushing random products at haphazard times. We are tailoring the marketing to be very specific to this customer, and it’s the technology that allows this to happen in near real-time.

    Conclusion

    As we enable more right-time integrations and interactions, retailers will begin to offer increased service to their customers. Localized and personalized service at scale will drive loyalty and lead to meaningful revenue growth for the retailers that execute well. Our industry needs to support Commerce Anywhere… and commerce anytime as well.

    Read the article

  • What's a standard productive vs total office hours ratio? [migrated]

    - by marianov
    So it goes like this: we keep track of tasks using Redmine and log the time spent on them, but at the end of the week, if we add up all the time spent on those tasks, there is no way a person has spent 40 hrs working. I think that's correct, because offices have overhead (reading emails, politics, coffee, distractions). What would be a normal ratio of productive time to total time spent? Other areas in the organization just measure time spent in the office (with the RFID badges that open the door), but we don't like that approach and are trying to convince Auditing to measure us using Redmine instead.

    Read the article

  • Improve speed of "start menu" in Linux Mint 10 - Ubuntu 10.10 derivative [closed]

    - by Gabriel L. Oliveira
    I have a global menu (including the application, administration and system tabs) that takes too much time (for me) to load: about 2.5 seconds. This time is only taken on the first start; after it has loaded, subsequent opens are much faster (less than 0.2 milliseconds). The menu took even longer before (about 5 seconds), and I found that was because of the 'Other' part of the menu, which included many applications installed with Wine, so I removed them all (I didn't need them at all). I have a "normal" knowledge of programming, and I think the process of starting the menu for the first time involves some kind of "cache function" that tries to find which installed apps need to be placed in the menu and shown to the user. But I haven't found this function, so I can't analyze in detail what it is doing (whether it searches for files under "~/.local/share/applications" or anything else). Also, I found that hitting "Alt-F2" also fires this "cache function", because after waiting for it to load, opening the menu took less than 0.2 milliseconds. So, could anyone help me reduce this time? I found on the internet that some users could reduce the time by resizing the application icons, but found that most of my icons are already 25x25. Any other ideas? Maybe loading it in a separate process, or including it in startup... I don't know. Ps: Sorry if this is an awkward question, but I just do not like waiting for things to happen, and I think this process should be smoother than it is now. Also, thanks in advance!

    Read the article

  • Should the number of developers be considered when estimating a task?

    - by Ludwig Magnusson
    I am pretty inexperienced with working on agile projects, but I have tried it a few times, and I always run into this problem when estimating a task: do we bring into the estimate the number of developers that will work on the task? Let me explain: Task A is estimated at one time unit and developer 1 will work on it. Task B is also estimated at one time unit, and developers 2 and 3 will work on it together. I.e., if developer 1 begins work on task A at the same time developers 2 and 3 begin work on task B, they will all finish at the same time according to the estimates. Should the estimate for task B be twice that for task A, or the same? The problem as I see it is that when a task is received and estimated, it is not always possible to know how many people will work on it. And if you assumed that two developers would work on the task for one time unit but it turns out that only one developer actually does it, this will not automatically mean that that developer will work on it for two time units. Is there any standard practice for this?

    Read the article

  • How should I fix problems with file permissions while restoring from Time Machine?

    - by Andrew Grimm
    While restoring files from a Time Machine backup, I got the error message "The operation can’t be completed because you don’t have permission to access some of the items." because of problems with files in one folder. What's the safest way to deal with this? The folder in question has permissions like:

    Andrew-Grimms-MacBook-Pro:kmer agrimm$ pwd
    /Volumes/Time Machine Backups/Backups.backupdb/Andrew Grimm’s MacBook Pro/2010-12-09-224309/Macintosh HD/Users/agrimm/ruby/kmer
    Andrew-Grimms-MacBook-Pro:kmer agrimm$ ls -ltra
    total 6156896
    drwxrwxrwx@ 19 agrimm staff       680 18 Jan  2008 Saccharomyces_cerevisiae
    -r--------@  1 agrimm staff  60221852  4 Aug  2009 hs_ref_GRCh37_chrY.fa
    -r--------@  1 agrimm staff 157488804  4 Aug  2009 hs_ref_GRCh37_chrX.fa
    (snip a few files)
    -r--------@  1 agrimm staff    676063 27 Oct  2009 NC_001143.fna
    -rw-r--r--@  1 agrimm staff      6148 23 Mar  2010 .DS_Store
    drwxr-xr-x@  3 agrimm staff      1530 23 Mar  2010 .
    drwxr-xr-x@ 30 agrimm staff      1054 20 Nov 14:43 ..

    Is it ok to do sudo chmod, or is there a safer approach? Background: files within the original folder on my computer also had weird permissions - I suspect I may have used sudo to copy some files from a thumbdrive onto my computer.

    Read the article

  • How do I set a time in a time_select view helper?

    - by brad
    I have a time_select in which I am trying to set a time value as follows:

    <%= f.time_select :start_time, :value => (@invoice.start_time ? @invoice.start_time : Time.now) %>

    This always produces a time selector with the current time rather than the value of @invoice.start_time. @invoice.start_time is in fact a datetime object, but it is passed to the time selector just fine if I use:

    <%= f.time_select :start_time %>

    I guess what I'm really asking is how to use the :value option with the time_select helper. Attempts like the following don't produce the desired result either:

    <%= f.time_select :start_time, :value => (Time.now + 2.hours) %>
    <%= f.time_select :start_time, :value => "14:30" %>

    Read the article

  • Is typeid of a type name always evaluated at compile time in C++?

    - by cyril42e
    I wanted to check that typeid is evaluated at compile time when used with a type name (i.e. typeid(int), typeid(std::string), ...). To do so, I repeated the comparison of two typeid calls in a loop and compiled with optimizations enabled, in order to see whether the compiler simplified the loop away (by looking at the execution time, which is 1us when it simplifies instead of 160ms when it does not). And I get strange results, because sometimes the compiler simplifies the code and sometimes it does not. I use g++ (I tried different 4.x versions), and here is the program:

    #include <iostream>
    #include <typeinfo>
    #include <time.h>

    class DisplayData {};
    class RobotDisplay: public DisplayData {};
    class SensorDisplay: public DisplayData {};
    class RobotQt {};
    class SensorQt {};

    timespec tp1, tp2;
    const int n = 1000000000;

    int main() {
        int avg = 0;
        clock_gettime(CLOCK_REALTIME, &tp1);
        for(int i = 0; i < n; ++i) {
            // if (typeid(RobotQt) == typeid(RobotDisplay))   // (1) compile time
            // if (typeid(SensorQt) == typeid(SensorDisplay)) // (2) compile time
            if (typeid(RobotQt) == typeid(RobotDisplay) || typeid(SensorQt) == typeid(SensorDisplay)) // (3) not compile time ???!!!
                avg++;
            else
                avg--;
        }
        clock_gettime(CLOCK_REALTIME, &tp2);
        std::cout << "time (" << avg << "): " << (tp2.tv_sec-tp1.tv_sec)*1000000000+(tp2.tv_nsec-tp1.tv_nsec) << " ns" << std::endl;
    }

    The conditions under which this problem appears are not clear, but:

    - if there is no inheritance involved, no problem (always compile time)
    - if I do only one comparison, no problem
    - the problem only appears with a disjunction of comparisons when all the terms are false

    So is there something I didn't get about how typeid works (is it always supposed to be evaluated at compile time when used with type names?), or could this be a gcc bug in evaluation or optimization? About the context: I tracked the problem down to this very simplified example, but my goal is to use typeid with template types (as partial function template specialization is not possible). Thanks for your help!
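    One observation worth adding: even when typeid is applied to a type name (so the type_info object itself is selected at compile time), the == comparison goes through std::type_info::operator==, which is an ordinary run-time call that the optimizer may or may not inline and fold. For the template use case mentioned at the end, a type test that lives entirely in the type system avoids depending on the optimizer at all. A minimal sketch, assuming C++11's <type_traits> (std::tr1 or Boost provide equivalents for the gcc 4.x era mentioned above):

    #include <type_traits>

    // std::is_same<T, U>::value is a compile-time constant, so the branch on
    // it can be folded away regardless of how typeid is implemented.
    template <typename T, typename U>
    int classify() {
        if (std::is_same<T, U>::value)
            return 1;   // same type
        else
            return 0;   // different types
    }

    // usage: classify<RobotQt, RobotDisplay>() reduces to a constant 0.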

    Read the article

  • Speakers, Please Check Your Time

    - by AjarnMark
    Woodrow Wilson was once asked how long it would take him to prepare for a 10 minute speech. He replied "Two weeks". He was then asked how long it would take for a 1 hour speech. "One week", he replied. 2 hour speech? "I'm ready right now," he replied. Whether that is a true story or an urban legend, I don’t really know, but either way, it is a poignant reminder for all speakers, and particularly apropos this week leading up to the PASS Community Summit. (Cross-posted to the PASS Professional Development Virtual Chapter blog #PASSProfDev.)

    What’s the point of that story? Simply this… if you have plenty of time to do your presentation, you don’t need to prepare much because it is easy to throw in more and more material to stretch out to your allotted time. But if you are on a tight time constraint, then it will take significant preparation to distill your talk down to only the essential points.

    I have attended seven of the last eight North American Summit events, and every one of them has been fantastic. The speakers are great, the material is timely and relevant, and the networking opportunities are awesome. And every year, there is one little thing that just bugs me… speakers going over their allotted time.

    Why does it bother me so? Well, if you look at a typical schedule for a Summit, you’ll see that there are six or more sessions going on at the same time, and only 15 minutes to move from one to another. If you’re trying to maximize your training dollar by attending something during every session time slot, and you don’t want to be the last guy trying to squeeze into the middle of the row, then those 15 minutes can be critical. All the more so if you need to stop and use the bathroom or if you have to hike to the opposite end of the convention center. It is really a bad position to find yourself having to choose between learning the last key points of Speaker A who is going over time, and getting over to Speaker B on time so you don’t miss her key opening remarks.

    And frankly, I think it is just rude. Yes, the speakers are the function; after all, they are bringing the content that the rest of us are paying to learn. But it is also an honor to be given the opportunity to speak at a conference like this, and no one speaker is so important that the conference would be a disaster without him. Speakers know when they submit their abstract, long before the conference, how much time they will have. It has been the same pattern at the Summit for at least the last eight years. Program Sessions are 75 minutes long. Some speakers who have a good track record, and meet other qualifying criteria, are extended an invitation to present a Spotlight Session, which is 90 minutes (a 20% increase). So there really is no excuse. It’s not like you were promised a 2-hour segment and then discovered when you got here that it was only 75 minutes. In fact, it’s not like PASS advertised 90-minute sessions for everyone and then a select few were cut back to only 75. As a speaker, you know well before you get here which type of session you are doing and how long it is, so as a professional, you should plan accordingly.

    Now you might think that this only happens to rookies, but I’ll tell you that some of the worst offenders are big-name veterans who draw huge attendance numbers for their sessions. Some attendees blow this off as, “Hey, it’s so-and-so, and I’d stay here for hours and listen to him/her talk.” To which I would reply, “Then they should have submitted for a pre- or post-conference day-long seminar instead, but don’t try to squeeze your day-long talk into a 90-minute session.” Now I don’t really believe that these speakers are being malicious or just selfishly trying to extend their time in the spotlight. I think that most of them are merely being undisciplined and did not trim their presentation sufficiently, or allowed themselves to get off-track (often in a generous attempt to help someone in the audience with a question or problem that really should have been noted for further discussion after the session).

    So here is my recommendation… my plea, even. TRIM THE FAT! Now. Before it’s too late. Before you even get on the airplane, take a long, hard look at your presentation and eliminate some of the points that you originally thought you had to make, but in reality are not truly crucial to your main topic. Delete a few slides. Test your demos and have them already scripted rather than typing them during your talk. It is better to cut out too much and end up with plenty of time at the end for Questions & Answers. And you can always keep some notes on the stuff that you cut out so that you could fill it back in at the end as bonus material if you really do end up with a whole bunch of time on your hands. But I don’t think you will. And if you do, that will look even better to the audience, as it will look like you’re giving them something extra that not every audience gets. And they will thank you for that.

    Read the article
