Search Results

Search found 6028 results on 242 pages for 'total commander'.


  • How to create per-vertex normals when reusing vertex data?

    - by Chris Smith
    I am displaying a cube using a vertex buffer object (gl.ELEMENT_ARRAY_BUFFER). This allows me to specify vertex indices rather than having duplicate vertices. In the case of a simple cube, this means I only need eight vertices total, as opposed to needing three vertices per triangle, times two triangles per face, times six faces. Sound correct so far? My question is: how do I now deal with per-vertex attribute data such as color, texture coordinates, and normals when reusing vertices this way? If I am reusing the same vertex data in my indexed vertex buffer, how can I differentiate when vertex X is used as part of the cube's front face versus the cube's left face? In both cases I would like the surface normal and texture coordinates to be different. I understand I could average the surface normals, but then it wouldn't look like a cube. Also, that still doesn't work for texture coordinates. Is there a way to save memory using a vertex buffer object while being able to provide different vertex attribute data based on context? (Per-triangle would be ideal.) Or should I just duplicate each vertex for each context in which it gets rendered, so there is a one-to-one mapping between vertex, normal, color, etc.? Note: I'm using OpenGL ES.
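    For illustration only (this sketch is not from the original question; the corner coordinates, face table and normals below are made up), the usual compromise is to duplicate each corner once per face so that every copy can carry that face's normal and texture coordinates: 24 vertices and 36 indices, rather than 8 shared vertices or 36 fully duplicated ones. Sketched in Python:

        # Illustrative sketch: expand an 8-corner indexed cube into 24 vertices so
        # each face carries its own normal; indices then describe 12 triangles.
        CORNERS = [
            (-1, -1, -1), (1, -1, -1), (1, 1, -1), (-1, 1, -1),   # z = -1 corners
            (-1, -1,  1), (1, -1,  1), (1, 1,  1), (-1, 1,  1),   # z = +1 corners
        ]

        # Each face: (corner indices in CCW order as seen from outside, face normal)
        FACES = [
            ((4, 5, 6, 7), ( 0,  0,  1)),   # front
            ((1, 0, 3, 2), ( 0,  0, -1)),   # back
            ((5, 1, 2, 6), ( 1,  0,  0)),   # right
            ((0, 4, 7, 3), (-1,  0,  0)),   # left
            ((7, 6, 2, 3), ( 0,  1,  0)),   # top
            ((0, 1, 5, 4), ( 0, -1,  0)),   # bottom
        ]

        vertices = []   # interleaved position + normal, 24 entries total
        indices = []    # 6 faces * 2 triangles * 3 indices = 36 entries
        for corner_ids, normal in FACES:
            base = len(vertices)
            for cid in corner_ids:
                vertices.append(CORNERS[cid] + normal)   # one copy per face use
            indices += [base, base + 1, base + 2, base, base + 2, base + 3]

    The same expansion carries per-face texture coordinates: append them alongside the normal when building each duplicated vertex, then upload the interleaved array and the index list as usual.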

    Read the article

  • SQL SERVER – Load Generator – Free Tool From CodePlex

    - by pinaldave
    One of the most common questions I receive is whether there is any tool available to generate load on SQL Server. Absolutely, there is a fabulous free tool for generating load on SQL Server available on CodePlex. This tool was released in 2008 but it is still extremely relevant for generating load on SQL Server, and it works fabulously. CodePlex is a project initiated by Microsoft for hosting open source software. The best part of this SQL Server Load Generator is that users can run multiple simultaneous queries against SQL Server using different login accounts and different application names. The interface of the tool is extremely easy to use and very intuitive as well. One of the things which I felt needed improvement was the default configuration, as every single time I added a query the default settings showed up and I had to change them manually. However, when I went to Menu >> Tools >> Options I was really happy, as it has options to change every single default that is available. Here one can give a default username, password and database name, as well as various settings related to configuration. Additionally, application logging is also possible through the options. A couple of other important points I noticed were the button to reset counters and a status bar containing useful information on Total Threads, Completed Queries and Failed Queries. I use this frequently for my load testing. What tool do you use to generate load on SQL Server? Download SQL Load Generator Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology

    Read the article

  • Open source clone for Starcraft

    - by sinekonata
    Two questions about an SC: Brood War clone: is there one yet, and how likely is legal pursuit? Since almost all the games I usually play now have a FOSS alternative, from alpha quality to very polished, I was wondering why I can't find one for SC, one of the biggest titans of the gaming community. So my first question is: is there a game that was made with the intention of emulating SC? Is it that I didn't look hard enough? Could it really be that no one has tackled what seems like a small effort compared to the creation of a game engine like Spring or games like Rigs of Rods or Minetest? And since SC is not being maintained at all, shouldn't the incentive to see a bug-free, moddable, balanced version be huge? What am I not getting here? In the event that there is none, is it a legal problem? Could it be that people expect Blizzard to release the sources themselves? Or that developers don't see the point in having SC mechanics without the patented lore and aesthetics? And the trickier question: if I were to make SC an open source game, a total clone of it for the purposes of maintenance, moddability, etc., would Blizzard really sue a team of developer fans who are just doing them a favour, knowing they don't lose any money from Korea broadcasts? Or would they do it just to avoid setting a precedent? Thanks for reading all that; I hope I'm not the only one who thinks it's weird that no one talks about it. See you.

    Read the article

  • Big AdventureWorks2012

    - by jamiet
    Last week I launched AdventureWorks on Azure, an initiative to make SQL Azure accessible to anyone, in my blog post AdventureWorks2012 now available for all on SQL Azure. Since then I think it's fair to say that the reaction has been lukewarm, with 31 insertions into the [dbo].[SqlFamily] table and only 8 donations via PayPal to support it; on the other hand those 8 donors have been incredibly generous and we nearly have enough in the bank to cover a full year's worth of availability. It was always my intention to try and make this offering more appealing, and to that end I have used an adapted version of Adam Machanic's make_big_adventure.sql script to massively increase the amount of data in the database and give the community more scope to really push SQL Azure and see what it is capable of. There are now two new tables in the database:
        [dbo].[bigProduct] with 25200 rows
        [dbo].[bigTransactionHistory] with 7827579 rows
    The credentials to login and use AdventureWorks on Azure are as they were before:
        Server:   mhknbn2kdz.database.windows.net
        Database: AdventureWorks2012
        User:     sqlfamily
        Password: sqlf@m1ly
    Remember, if you want to support AdventureWorks on Azure simply click here to launch a pre-populated PayPal Send Money form; all you have to do is login, fill in an amount, and click Send. We need more donations to keep this up and running, so if you think this is useful and worth supporting, please please donate. I mentioned that I had to adapt Adam's script, the main reasons being:
        Cross-database queries are not yet supported in SQL Azure, so I had to create a local copy of [dbo].[spt_values] rather than reference the one in [master].
        SELECT…INTO is not supported in SQL Azure.
        The 1GB limit of the SQL Azure web edition meant that there would not be enough space to store all the data generated by Adam's script, so I had to decrease the total number of rows.
    The amended script is available on my SkyDrive at https://skydrive.live.com/redir.aspx?cid=550f681dad532637&resid=550F681DAD532637!16756&parid=550F681DAD532637!16755 @Jamiet

    Read the article

  • Tuning B2B Server Engine Threads in SOA Suite 11g

    - by Shub Lahiri, A-Team
    Background: B2B 11g has a number of parameters that can be tweaked to tune the engine for handling high volumes of messages. These parameters are also known as B2B server properties and are managed via the EM console. This note highlights one aspect of the tuning exercise and describes the different threads that can be configured to tune the performance of a B2B server. Symptoms: The most common indicator of a B2B engine in need of tuning is a constant build-up of messages in an internal JMS queue within the B2B server. It is called B2B_EVENT_QUEUE and can be monitored via the WebLogic server console. Whenever such behaviour is seen, it usually results in a general degradation of performance. Remedy: There could be many contributing factors behind a B2B server's degradation of performance. However, one of the first things to tune away from the out-of-the-box, default configuration is the number of internal engine threads allocated within the B2B server. The default configuration for the B2B server engine threads is usually not suitable for high-volume messaging loads, so it is necessary to increase the counts for 3 types of threads by specifying the appropriate B2B server properties via the EM console, namely:
        Inbound - b2b.inboundThreadCount
        Outbound - b2b.outboundThreadCount
        Default - b2b.defaultThreadCount
    The function of these threads is fairly self-explanatory: the inbound threads process the inbound messages coming into the B2B server from an external endpoint, and the outbound threads process the messages that are sent out from the B2B server. The default threads are responsible for certain B2B server-specific special tasks. If the inbound and outbound thread counts are not specified, the default thread count also dictates the total number of inbound and outbound threads. As in any tuning exercise, the optimisation of these threads is usually reached via an iterative process. The best working combination of thread counts is directly related to the system infrastructure, traffic load and several other environmental factors.

    Read the article

  • ArchBeat Link-o-Rama for October 22, 2013

    - by OTN ArchBeat
    The road ahead for WebLogic 12c | Edwin Biemond
    Oracle ACE Edwin Biemond shares his thoughts on announced new features in Oracle WebLogic 12.1.3 & 12.1.4 and compares those upcoming releases to Oracle WebLogic 12.1.2.

    Oracle BI Apps 11.1.1.7.1 – GoldenGate Integration - Part 2: Setup and Configuration | Michael Rainey
    Michael Rainey continues his series with another technical article for you GoldenGate fans.

    There's A Virtual Developer Day in Your Future
    Have you experienced OTN VDD? Relax, it's not something that requires medical attention. But an OTN Virtual Developer Day event will enlarge your brain with hands-on information on Oracle technologies. Upcoming events will cover Oracle WebLogic and Coherence (Nov 5) and Oracle ADF (Nov 19).

    My Summary of Oracle Open World 2013 | Luis Weir
    SOA/Middleware specialist Luis Weir's first trip to Oracle OpenWorld was what you might call a total immersion experience. His blog post includes details about what kept him very, very busy during his OOW13 experience.

    Live Blog: Book Review of Building Modular Cloud Apps with OSGi by Bert Ertman and Paul Bakker | Lucas Jellema
    This interesting post from Oracle ACE Director Lucas Jellema is a work in progress. He's updating as he goes. Check it out.

    Thought for the Day
    "In the information age, you don't teach philosophy as they did after feudalism. You perform it. If Aristotle were alive today he'd have a talk show." — Timothy Leary, American psychologist and writer (October 22, 1920 – May 31, 1996)
    Source: brainyquote.com

    Read the article

  • How can I find the shortest path between two subgraphs of a larger graph?

    - by Pops
    I'm working with a weighted, undirected multigraph (loops not permitted; most node connections have multiplicity 1; a few node connections have multiplicity 2). I need to find the shortest path between two subgraphs of this graph that do not overlap with each other. There are no other restrictions on which nodes should be used as start/end points. Edges can be selectively removed from the graph at certain times (as explained in my previous question) so it's possible that for two given subgraphs, there might not be any way to connect them. I'm pretty sure I've heard of an algorithm for this before, but I can't remember what it's called, and my Google searches for strings like "shortest path between subgraphs" haven't helped. Can someone suggest a more efficient way to do this than comparing shortest paths between all nodes in one subgraph with all nodes in the other subgraph? Or at least tell me the name of the algorithm so I can look it up myself? For example, if I have the graph below, the nodes circled in red might be one subgraph and the nodes circled in blue might be another. The edges would all have positive integer weights, although they're not shown in the image. I'd want to find whatever path has the shortest total cost as long as it starts at a red node and ends at a blue node. I believe this means the specific node positions and edge weights cannot be ignored. (This is just an example graph I grabbed off Wikimedia and drew on, not my actual problem.)
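    A common way to avoid comparing shortest paths between all pairs of nodes is a single multi-source Dijkstra: seed the priority queue with every node of one subgraph at distance 0 (equivalent to adding a virtual source connected to all of them by zero-weight edges) and stop as soon as any node of the other subgraph is settled. A minimal sketch, assuming an adjacency-dict graph with positive weights and hashable, mutually comparable node labels (the function name and data layout are illustrative, not from the question):

        import heapq

        def shortest_path_between_subgraphs(graph, a_nodes, b_nodes):
            """graph: {node: [(neighbor, weight), ...]} with positive weights."""
            b_set = set(b_nodes)
            dist = {n: 0 for n in a_nodes}      # every A node starts at distance 0
            prev = {n: None for n in a_nodes}
            heap = [(0, n) for n in a_nodes]
            heapq.heapify(heap)
            while heap:
                d, node = heapq.heappop(heap)
                if d > dist.get(node, float("inf")):
                    continue                    # stale queue entry
                if node in b_set:               # first settled B node => shortest path
                    path = []
                    while node is not None:
                        path.append(node)
                        node = prev[node]
                    return d, path[::-1]
                for neighbor, weight in graph.get(node, ()):
                    nd = d + weight
                    if nd < dist.get(neighbor, float("inf")):
                        dist[neighbor] = nd
                        prev[neighbor] = node
                        heapq.heappush(heap, (nd, neighbor))
            return None                         # subgraphs are not connected

    Parallel edges from the multigraph can simply be listed twice in the adjacency lists; with positive weights the cheaper one wins automatically.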

    Read the article

  • Introducing the Industry's First Analytics Machine, Oracle Exalytics

    - by Manan Goel
    Analytics is all about gaining insights from the data for better decision making. The business press is abuzz with examples of leading organizations across the world using data-driven insights for strategic, financial and operational excellence. A recent study on “data-driven decision making” conducted by researchers at MIT and Wharton provides empirical evidence that “firms that adopt data-driven decision making have output and productivity that is 5-6% higher than the competition”. The potential payoff for firms can range from higher shareholder value to a market leadership position. However, the vision of delivering fast, interactive, insightful analytics has remained elusive for most organizations. Most enterprise IT organizations continue to struggle to deliver actionable analytics due to time-sensitive, sprawling requirements and ever-tightening budgets. The issue is further exacerbated by the fact that most enterprise analytics solutions require dealing with a number of hardware, software, storage and networking vendors, and precious resources are wasted integrating the hardware and software components to deliver a complete analytical solution. Oracle Exalytics In-Memory Machine is the world’s first engineered system specifically designed to deliver high performance analysis, modeling and planning. Built using industry-standard hardware, market-leading business intelligence software and in-memory database technology, Oracle Exalytics is an optimized system that delivers answers to all your business questions with unmatched speed, intelligence, simplicity and manageability. Oracle Exalytics’s unmatched speed, visualizations and scalability deliver extreme performance for existing analytical and enterprise performance management applications and enable a new class of intelligent applications such as Yield Management, Revenue Management, Demand Forecasting, Inventory Management, Pricing Optimization, Profitability Management, Rolling Forecast and Virtual Close. Requiring no application redesign, Oracle Exalytics can be deployed in existing IT environments by itself or in conjunction with Oracle Exadata and/or Oracle Exalogic to enable extreme performance and a best-in-class user experience. Based on proven hardware, software and in-memory technology, Oracle Exalytics lowers the total cost of ownership, reduces operational risk and provides unprecedented analytical capability for workgroup, departmental and enterprise-wide deployments. Click here to learn more about Oracle Exalytics.

    Read the article

  • SD-CARD reader does not show in ubuntu

    - by shantanu
    I bought an Acer Aspire 4250. It has a built-in SD card reader, but it is not working. Nothing shows up in /media or fdisk, but there is something in dmesg.

    dmesg:

        new high-speed USB device number 3 using ehci_hcd
        [  127.396733] scsi5 : usb-storage 2-2:1.0
        [  128.526562] scsi 5:0:0:0: Direct-Access Multiple Card Reader 1.00 PQ: 0 ANSI: 0
        [  128.532512] sd 5:0:0:0: Attached scsi generic sg2 type 0
        [  129.008110] ohci_hcd 0000:00:12.0: PCI INT A disabled
        [  129.032083] ohci_hcd 0000:00:13.0: PCI INT A disabled
        [  129.056411] ohci_hcd 0000:00:16.0: PCI INT A disabled
        [  129.338026] sd 5:0:0:0: [sdb] Attached SCSI removable disk
        [  129.808328] ohci_hcd 0000:00:14.5: PCI INT C disabled
        [  167.728616] usb 2-2: USB disconnect, device number 3
        [  169.872284] ehci_hcd 0000:00:13.2: PCI INT B disabled
        [  169.872340] ehci_hcd 0000:00:13.2: PME# enabled

    fdisk -l:

        Disk /dev/sda: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x0006bc6d

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048    48828415    24413184    7  HPFS/NTFS/exFAT
        /dev/sda2        48828416    50829311     1000448   82  Linux swap / Solaris
        /dev/sda3        50829312    99657727    24414208   83  Linux
        /dev/sda4        99659774   625141759   262740993    5  Extended
        Partition 4 does not start on physical sector boundary.
        /dev/sda5        99659776   275439615    87889920    7  HPFS/NTFS/exFAT
        /dev/sda6       275441664   451221503    87889920    7  HPFS/NTFS/exFAT
        /dev/sda7       451223552   625141759    86959104    7  HPFS/NTFS/exFAT

    I just found another problem. I formatted the last three partitions as EXT4 with Disk Utility, but they are showing as NTFS/exFAT in fdisk. :-(

    Read the article

  • How to know which partition is which?

    - by user206870
    I was just wondering which partition belongs to which system. On my computer I have Windows 7 and two Ubuntu systems (it was an accident, which is why I need to know which partition is which). So how do I know which one is which? PS: here's the output:

        jp@jp-Satellite-L555D:~$ sudo update-grub
        [sudo] password for jp:
        Generating grub.cfg ...
        Found linux image: /boot/vmlinuz-3.11.0-12-generic
        Found initrd image: /boot/initrd.img-3.11.0-12-generic
        Found memtest86+ image: /boot/memtest86+.bin
        Found Windows 7 (loader) on /dev/sda1
        Found Windows 7 (loader) on /dev/sda2
        Found Windows Recovery Environment (loader) on /dev/sda3
        Found Ubuntu 13.10 (13.10) on /dev/sda7
        done

        jp@jp-Satellite-L555D:~$ sudo fdisk -l
        Disk /dev/sda: 250.1 GB, 250059350016 bytes
        255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xf6f5148e

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048     3074047     1536000   27  Hidden NTFS WinRE
        /dev/sda2         3074048   213421022   105173487+   7  HPFS/NTFS/exFAT
        /dev/sda3       469676032   488396799     9360384   17  Hidden HPFS/NTFS
        /dev/sda4       213422078   469676031   128126977    5  Extended
        /dev/sda5       300185600   463910911    81862656   83  Linux
        /dev/sda6       463912960   469676031     2881536   82  Linux swap / Solaris
        /dev/sda7       213422080   300185599    43381760   83  Linux

        Partition table entries are not in disk order

    Thanks to whoever can answer this. Another quick question: what is the extended partition?

    Read the article

  • How to install Oracle Database 11g Express Edition on Ubuntu 12.10?

    - by Praneeth Pj
    I installed the Oracle database following the steps mentioned in this blog:

    1. Downloaded 11g Express Edition.
    2. Created a new user oracle under the group dba. The following steps were executed as this user.
    3. Unzipped oracle-xe-11.2.0-1.0.x86_64.rpm.zip and then converted the rpm to an Ubuntu package by running:
           sudo alien --scripts -d oracle-xe-11.2.0-1.0.x86_64.rpm
    4. Created the /sbin/chkconfig file and added the entries as specified there.
    5. Created /etc/sysctl.d/60-oracle.conf and added the entries as specified in the same link as above.
    6. Ran the commands:
           ln -s /usr/bin/awk /bin/awk
           mkdir /var/lock/subsys
           touch /var/lock/subsys/listener
    7. Installed the .deb generated in step 3:
           sudo dpkg --install oracle-xe_11.2.0-2_amd64.deb
    8. Left the default values as they are:
           sudo /etc/init.d/oracle-xe configure
    9. Set the following env variables in the ~/.bashrc file:
           export ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe
           export ORACLE_SID=XE
           export NLS_LANG=`$ORACLE_HOME/bin/nls_lang.sh`
           export ORACLE_BASE=/u01/app/oracle
           export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
           export PATH=$ORACLE_HOME/bin:$PATH
    10. Ran the commands:
           chown -R oracle:dba /var/tmp/.oracle
           chmod -R 755 /var/tmp/.oracle
           chown -R oracle:dba /tmp/.oracle
           chmod -R 755 /tmp/.oracle
    11. Started the Oracle Database 11g Express Edition instance:
           sudo service oracle-xe start
    12. Ran sqlplus / as sysdba and got the following:

           SQL*Plus: Release 11.2.0.2.0 Production on Thu Jan 3 09:41:58 2013
           Copyright (c) 1982, 2011, Oracle. All rights reserved.
           Connected to an idle instance.

    Now when executing any SQL statement in SQL*Plus, I end up with the following error:

        SQL> select * from dual;
        select * from dual
        *
        ERROR at line 1:
        ORA-01034: ORACLE not available
        Process ID: 0
        Session ID: 0 Serial number: 0

    I have increased the swap memory as specified here:

        $ free -m
                     total       used       free     shared    buffers     cached
        Mem:          3901       3428        473          0        182       1988
        -/+ buffers/cache:       1258       2643
        Swap:         5066          0       5066

    Read the article

  • How Many Google +1's Does a Website Need in Order for Google Webmaster Tools to Show Characteristics?

    - by Asaph
    I have added the Google +1 button to my website and discovered the new Social Activity section in Google Webmaster Tools. Apparently, one of the interesting things you can learn about your audience is demographic data. But in GWT, the Social Activity Audience section for my site (currently 82 +1's) says the following:

        Your site doesn't have enough +1's yet to show characteristics

    But I'm not sure how many +1's are enough. Google's official help page for the Audience section offers little insight:

        The Audience page displays information about people who have +1'd your pages, including the total number of unique users, their location, and their age and gender. All information is anonymized; Google doesn't share personal information about people who have +1'd your pages. To protect privacy, Google won't display age, gender, or location data unless a certain minimum number of people have +1'd your content.

    But what is that "certain minimum number"? I've tried Googling this, but all I could find to date was this page, which doesn't answer the question. So how many +1's does a site need before GWT will show me audience demographic characteristics?

    Read the article

  • Oracle Global HR Cloud Implementation Training Can Help Meet Your Business Needs

    - by HCM-Oracle
    By Jim Vonick
    A key goal for the deployment of your Oracle Global HR Cloud applications is to accelerate the implementation and adoption of your applications, so that your business can start realizing all of the benefits that this rich solution offers. Implementation team members need to have the skills and knowledge to ensure a smooth, rapid and successful implementation of your applications. During set-up, you want to optimize the configuration to best meet your business needs. In order to do this you need to understand the foundation and configuration options of your applications, so that decisions can be made during set-up that best align with your business. To that end product level implementation training is recommended for Oracle Global HR Cloud deployments.

    Training For Implementation Team Members and Consultants:
        Fusion Applications: HCM Security: Learn how to implement security for Oracle Fusion HCM applications by creating and customizing roles. You'll learn how to create security profiles to restrict data access, provision roles to users, create and manage user accounts, and verify security setup.
        Fusion Applications: HCM Global Human Resources: Learn how to set up your enterprise and workforce structures, how to perform functional tasks, and how to configure security for Global Human Resources data.
        Fusion Applications: HCM Compensation: Learn how to implement, configure, and use Oracle Fusion Compensation to manage base pay, individual compensation, workforce compensation, and total compensation statements.
        Fusion Applications: HCM Benefits: This course teaches you to implement, configure and manage Oracle Fusion Benefits, including how to implement benefit plans and programs.
        Fusion Applications: HCM Payroll Implementation (US): This course provides implementation training for payroll managers or payroll administrators. Learn how to process payroll to ensure accurate setup results.

    Learn More: See all Fusion HCM Training
    Jim Vonick is a Senior Product Manager with Oracle University focusing on training for Oracle Applications and Industry Solutions.

    Read the article

  • How to determine if you should use full or differential backup?

    - by Peter Larsson
    Or ask yourself, "How much of the database has changed since the last backup?". Here is a simple script that will tell you how much (in percent) has changed in the database since the last backup.

        -- Prepare staging table for all DBCC outputs
        DECLARE @Sample TABLE
                (
                    Col1 VARCHAR(MAX) NOT NULL,
                    Col2 VARCHAR(MAX) NOT NULL,
                    Col3 VARCHAR(MAX) NOT NULL,
                    Col4 VARCHAR(MAX) NOT NULL,
                    Col5 VARCHAR(MAX)
                )

        -- Some intermediate variables for controlling loop
        DECLARE @FileNum BIGINT = 1,
                @PageNum BIGINT = 6,
                @SQL VARCHAR(100),
                @Error INT,
                @DatabaseName SYSNAME = 'Yoda'

        -- Loop all files to the very end
        WHILE 1 = 1
            BEGIN
                BEGIN TRY
                    -- Build the SQL string to execute
                    SET     @SQL = 'DBCC PAGE(' + QUOTENAME(@DatabaseName) + ', ' + CAST(@FileNum AS VARCHAR(50)) + ', '
                                    + CAST(@PageNum AS VARCHAR(50)) + ', 3) WITH TABLERESULTS'

                    -- Insert the DBCC output in the staging table
                    INSERT  @Sample
                            (
                                Col1,
                                Col2,
                                Col3,
                                Col4
                            )
                    EXEC    (@SQL)

                    -- DCM pages exist at an interval
                    SET     @PageNum += 511232
                END TRY

                BEGIN CATCH
                    -- If error and first DCM page does not exist, all files are read
                    IF @PageNum = 6
                        BREAK
                    ELSE
                        -- If no more DCM, increase filenum and start over
                        SELECT  @FileNum += 1,
                                @PageNum = 6
                END CATCH
            END

        -- Delete all records not related to diff information
        DELETE
        FROM    @Sample
        WHERE   Col1 NOT LIKE 'DIFF%'

        -- Split the range
        UPDATE  @Sample
        SET     Col5 = PARSENAME(REPLACE(Col3, ' - ', '.'), 1),
                Col3 = PARSENAME(REPLACE(Col3, ' - ', '.'), 2)

        -- Remove last parenthesis
        UPDATE  @Sample
        SET     Col3 = RTRIM(REPLACE(Col3, ')', '')),
                Col5 = RTRIM(REPLACE(Col5, ')', ''))

        -- Remove initial information about filenum
        UPDATE  @Sample
        SET     Col3 = SUBSTRING(Col3, CHARINDEX(':', Col3) + 1, 8000),
                Col5 = SUBSTRING(Col5, CHARINDEX(':', Col5) + 1, 8000)

        -- Prepare data outtake
        ;WITH cteSource(Changed, [PageCount])
        AS (
            SELECT      Changed,
                        SUM(COALESCE(ToPage, FromPage) - FromPage + 1) AS [PageCount]
            FROM        (
                            SELECT  CAST(Col3 AS INT) AS FromPage,
                                    CAST(NULLIF(Col5, '') AS INT) AS ToPage,
                                    LTRIM(Col4) AS Changed
                            FROM    @Sample
                        ) AS d
            GROUP BY    Changed
            WITH ROLLUP
        )
        -- Present the final result
        SELECT  COALESCE(Changed, 'TOTAL PAGES') AS Changed,
                [PageCount],
                100.E * [PageCount] / SUM(CASE WHEN Changed IS NULL THEN 0 ELSE [PageCount] END) OVER () AS Percentage
        FROM    cteSource

    Read the article

  • Algorithm to figure out appointment times?

    - by Rachel
    I have a weird situation where a client would like a script that automatically sets up thousands of appointments over several days. The tricky part is the appointments are for a variety of US time zones, and I need to take the consumer's local time zone into account when generating appointment dates and times for each record.

    Appointment Rules:
        Appointments should be set from 8AM to 8PM Eastern Standard Time, with breaks from 12P-2P and 4P-6P. This leaves a total of 8 hours per day available for setting appointments.
        Appointments should be scheduled 5 minutes apart. 8 hours of 5-minute intervals means 96 appointments per day.
        There will be 5 users at a time handling appointments. 96 appointments per day multiplied by 5 users equals 480, so the maximum number of appointments that can be set per day is 480.

    Now the tricky requirement: Appointments are restricted to 8am to 8pm in the consumer's local time zone. This means that the earliest time allowed for each appointment is different depending on the consumer's time zone:
        Eastern: 8A
        Central: 9A
        Mountain: 10A
        Pacific: 11A
        Alaska: 12P
        Hawaii or Undefined: 2P
        Arizona: 10A or 11A based on current Daylight Savings Time

    Assuming a data set can be several thousand records, and each record will contain a timezone value, is there an algorithm I could use to determine a Date and Time for every record that matches the rules above?
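    A sketch of one way to do it (not from the original question; the offset table and break windows restate the rules above, while the function names, record layout and fixed per-zone Eastern-hour floors are assumptions that ignore DST details): walk the day's 5-minute Eastern-time slots, let each slot take up to 5 pending records whose local 8am floor has already been reached, and roll over to the next day when the slots run out.

        from datetime import datetime, timedelta

        SLOT = timedelta(minutes=5)
        BREAKS = [(12, 14), (16, 18)]          # Eastern-time break windows (hours)
        EARLIEST_EASTERN = {                   # earliest allowed Eastern hour per zone
            "Eastern": 8, "Central": 9, "Mountain": 10, "Pacific": 11,
            "Alaska": 12, "Hawaii": 14, "Undefined": 14, "Arizona": 10,  # or 11 during DST
        }
        USERS = 5                              # parallel appointments per slot

        def day_slots(day):
            """Yield all bookable Eastern-time slots for one day, earliest first."""
            t = day.replace(hour=8, minute=0, second=0, microsecond=0)
            end = day.replace(hour=20, minute=0, second=0, microsecond=0)
            while t < end:
                if not any(start <= t.hour < stop for start, stop in BREAKS):
                    yield t
                t += SLOT

        def schedule(records, first_day):
            """records: dicts with a 'timezone' key; yields (record, appointment_time)."""
            pending = list(records)
            day = first_day
            while pending:
                for slot in day_slots(day):
                    eligible = [r for r in pending
                                if slot.hour >= EARLIEST_EASTERN.get(r["timezone"], 14)]
                    for r in eligible[:USERS]:
                        pending.remove(r)
                        yield r, slot
                    if not pending:
                        return
                day += timedelta(days=1)

        # Example with hypothetical records:
        # booked = list(schedule([{"timezone": "Pacific"}] * 1000, datetime(2013, 6, 3)))

    Records from the westernmost zones simply stay pending until the first slot at or after their Eastern-time floor, so the early-morning slots are naturally filled by Eastern and Central consumers first.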

    Read the article

  • how can I fix error: hd0 out of disk?

    - by rux
    I am running Ubuntu 12.04 on a netbook - Acer AS 1410. After a download session, I restarted the computer and it said:

        error: hd0 out of disk.
        Press any key to continue...

    I pressed everything, but it's just frozen there. Any idea what's wrong with it and what I can do to fix it? I haven't been able to run my computer at all since it froze like that. Help please! I booted the live CD and ran sudo fdisk -lu in the terminal, and here's what it gave me:

        Disk /dev/sda: 60.0 GB, 60022480896 bytes
        255 heads, 63 sectors/track, 7297 cylinders, total 117231408 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x9a696263

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda3            2048   117229567    58613760    5  Extended
        /dev/sda5   *    71647232   109039615    18696192   83  Linux
        /dev/sda6       109041664   117229567     4093952   82  Linux swap / Solaris
        /dev/sda7            4096    71645183    35820544   83  Linux

        Partition table entries are not in disk order

    I am somewhat of a beginner at this, so I don't know what this means. Any ideas? Thanks!

    Read the article

  • Performance issues with visibility detection and object transparency

    - by maul
    I'm working on a 3D game that has a view similar to classic isometric games (Diablo, etc.). One of the things I'm trying to implement is the effect of turning walls transparent when the player walks behind them. By itself this is not a huge issue, but I'm having trouble determining exactly which walls should be transparent. I can't use a circle or square mask: there are a lot of cases where the wall piece at the same (relative) position has different visibility depending on the surrounding area. With the help of a friend I came up with this algorithm:

    1. Create a grid around the player that contains a lot of "visibility points" (my game is semi tile-based so I create one point for every tile on the grid) - the size of the square's side is close to the radius where I make objects transparent. I found 6x6 to be a good value, so that's 36 visibility points total.
    2. For every visibility point on the grid, check if that point is in the player's line of sight.
    3. For every visibility point that is in the LOS, cast a ray from the camera to that point and mark all objects the ray hits as transparent.

    This algorithm works - not perfectly, but it only requires some tuning - however it is very slow. As you can see, it requires 36 ray casts minimum, but most of the time 60-70 depending on the position. That's simply too much for the CPU. Is there a better way to do this? I'm using Unity 3D but I'm not looking for an engine-specific solution.
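    For reference, here is the algorithm above restated as a compact, engine-agnostic sketch (Python used purely for illustration; in_line_of_sight and raycast_all are hypothetical stand-ins for the engine's own physics queries, and none of the names come from the original post). It makes the cost explicit: one LOS test per grid point plus one camera ray per player-visible point, every time it runs.

        GRID_HALF = 3          # half-width of the 6x6 grid of visibility points
        TILE = 1.0             # spacing between visibility points (one tile)

        def update_wall_transparency(player_pos, camera_pos, walls,
                                     in_line_of_sight, raycast_all):
            """Mark walls transparent if a camera ray to a player-visible point hits them."""
            px, py, pz = player_pos
            hit = set()
            for gx in range(-GRID_HALF, GRID_HALF):
                for gz in range(-GRID_HALF, GRID_HALF):
                    point = (px + gx * TILE, py, pz + gz * TILE)
                    if not in_line_of_sight(player_pos, point):
                        continue                               # point hidden from the player
                    hit.update(raycast_all(camera_pos, point)) # walls between camera and point
            for wall in walls:
                wall.transparent = wall in hit                 # hypothetical flag on the wall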

    Read the article

  • lower-case 'c' key not working in bash

    - by gavin
    This is a bit of a strange one. I'm running Ubuntu 12.04. It's been working well, but today I ran into a hell of a strange phenomenon: I can no longer type a lower-case 'c' in bash. At first I thought it was a misconfiguration of the GNOME terminal, but I tried both a stock xterm and the console directly (Ctrl+Alt+F1) and the issue was the same. I can type an upper-case 'C' without any difficulty, and I can type a lower-case 'c' in any other terminal-based program (vim, bash, less, etc.). The lower-case 'c' also works if I jump into plain old sh. I looked at all the configuration files I know of and haven't found anything incriminating in there. I suspect it's not going to be that simple anyway, because if I run bash with the '--norc' option from within sh, the problem remains. I don't know what else to check. In fact, if I wanted to cause this problem on a given machine, I have no idea how it could be done. Total mystery.

    Read the article

  • mdadm starts resync on every boot

    - by Anteru
    For a few days now (and I'm positive it started shortly before I updated my server from 13.04 to 13.10), my mdadm array has been resyncing on every boot. In the syslog, I get the following output:

        [    0.809256] md: linear personality registered for level -1
        [    0.811412] md: multipath personality registered for level -4
        [    0.813153] md: raid0 personality registered for level 0
        [    0.815201] md: raid1 personality registered for level 1
        [    1.101517] md: raid6 personality registered for level 6
        [    1.101520] md: raid5 personality registered for level 5
        [    1.101522] md: raid4 personality registered for level 4
        [    1.106825] md: raid10 personality registered for level 10
        [    1.935882] md: bind<sdc1>
        [    1.943367] md: bind<sdb1>
        [    1.945199] md/raid1:md0: not clean -- starting background reconstruction
        [    1.945204] md/raid1:md0: active with 2 out of 2 mirrors
        [    1.945225] md0: detected capacity change from 0 to 2000396680192
        [    1.945351] md: resync of RAID array md0
        [    1.945357] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
        [    1.945359] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
        [    1.945362] md: using 128k window, over a total of 1953512383k.
        [    2.220468] md0: unknown partition table

    I'm not sure what's up with that detected capacity change; looking at some old logs, it has appeared earlier as well without a resync right afterwards. In fact, I let it run yesterday until completion and rebooted, and then it wouldn't resync, but today it does resync again. For instance, yesterday I got:

        [    1.872123] md: bind<sdc1>
        [    1.950946] md: bind<sdb1>
        [    1.952782] md/raid1:md0: active with 2 out of 2 mirrors
        [    1.952807] md0: detected capacity change from 0 to 2000396680192
        [    1.954598] md0: unknown partition table

    So it seems to be a problem that the RAID array does not get marked as clean after every shutdown? How can I troubleshoot this? The disks themselves are both fine; SMART tells me no errors, everything OK.

    Read the article

  • Data Aggregation of CSV files java

    - by royB
    I have k CSV files (5 CSV files, for example), and each file has m fields which produce a key and n values. I need to produce a single CSV file with the aggregated data. I'm looking for the most efficient solution for this problem, speed mainly. I don't think, by the way, that we will have memory issues. Also I would like to know if hashing is really a good solution, because we would have to use a 64-bit hashing solution to reduce the chance of a collision to less than 1% (we have around 30000000 rows per aggregation). For example:

        file 1:
        f1,f2,f3,v1,v2,v3,v4
        a1,b1,c1,50,60,70,80
        a3,b2,c4,60,60,80,90

        file 2:
        f1,f2,f3,v1,v2,v3,v4
        a1,b1,c1,30,50,90,40
        a3,b2,c4,30,70,50,90

        result:
        f1,f2,f3,v1,v2,v3,v4
        a1,b1,c1,80,110,160,120
        a3,b2,c4,90,130,130,180

    Algorithms we have thought of until now:
        hashing (using a concurrent hash table)
        merge sorting the files
        DB: using MySQL or Hadoop or Redis

    The solution needs to be able to handle a huge amount of data (each file more than two million rows). A better example:

        file 1:
        country,city,peopleNum
        england,london,1000000
        england,coventry,500000

        file 2:
        country,city,peopleNum
        england,london,500000
        england,coventry,500000
        england,manchester,500000

        merged file:
        country,city,peopleNum
        england,london,1500000
        england,coventry,1000000
        england,manchester,500000

    The key is: country,city. This is just an example; my real key is of size 6 and the data columns are of size 8 - a total of 14 columns. We would like the solution to be the fastest with regard to data processing.
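    The post targets Java, but the hash-based merge it describes is easy to sketch language-agnostically; here it is in Python (the file names, column counts and the assumption of identical headers are illustrative, not from the post):

        import csv

        KEY_COLS = 6   # the post says the real key is the first 6 of 14 columns

        def aggregate(paths, out_path):
            totals = {}          # key tuple -> list of summed numeric values
            header = None
            for path in paths:
                with open(path, newline="") as f:
                    reader = csv.reader(f)
                    header = next(reader)                    # assumes identical headers
                    for row in reader:
                        key = tuple(row[:KEY_COLS])
                        values = [int(v) for v in row[KEY_COLS:]]
                        if key in totals:
                            totals[key] = [a + b for a, b in zip(totals[key], values)]
                        else:
                            totals[key] = values
            with open(out_path, "w", newline="") as f:
                writer = csv.writer(f)
                writer.writerow(header)
                for key, values in totals.items():
                    writer.writerow(list(key) + values)

        # e.g. aggregate(["file1.csv", "file2.csv"], "merged.csv")   # hypothetical names

    Note that when the full key tuple is kept as the map key, rather than a 64-bit hash of it, the map resolves collisions by comparing the actual key values, so the collision-probability concern goes away; the trade-off is the memory needed to store the keys themselves.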

    Read the article

  • GParted in UBUNTU shows entire disk as UNALLOCATED SPACE

    - by msPeachy
    Good day to everyone. I hope someone can help me with my problem. I have a dual-boot Windows and Ubuntu system. I recently encountered an "hd0 out of disk" error and wasn't able to boot Ubuntu. So I booted into Windows, and after 2 or 3 boots and reboots of Windows, I tried booting Ubuntu but still got the hd0 out of disk error. I decided to run Ubuntu from a live USB to try to fix my Ubuntu partition using GParted, but when I run GParted, it shows my entire disk as UNALLOCATED SPACE! The strange thing is that Nautilus still shows and mounts my partitions. Also, every time I boot into Windows, my partitions exist and I am able to read and write to them. I have no idea what is wrong. Please help! I can't stand using Windows since most of the tools I use are in Ubuntu. I don't mind reinstalling Ubuntu. In fact I already tried reinstalling using the live USB, but I wasn't able to, since GParted and the Ubuntu installer itself do not recognize my partitions and show the entire disk as unallocated space. I am currently running Ubuntu from the live USB. Here's the output of sudo fdisk -l:

        Disk /dev/sda: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xb30ab30a

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048   104869887    52433920   83  Linux
        /dev/sda2       104869888   105074687      102400    7  HPFS/NTFS/exFAT
        /dev/sda3       105074688   156149759    25537536    7  HPFS/NTFS/exFAT
        /dev/sda4       156151800   625153409   234500805    f  W95 Ext'd (LBA)
        /dev/sda5       156151808   169156591     6502392   82  Linux swap / Solaris
        /dev/sda6       169158656   294991871    62916608    7  HPFS/NTFS/exFAT
        /dev/sda7       294993920   471037944    88022012+   7  HPFS/NTFS/exFAT
        /dev/sda8       471041928   625121152    77039612+   7  HPFS/NTFS/exFAT

    When I run sudo parted -l, I get this error message:

        ubuntu@ubuntu:~$ sudo parted -l
        Error: Can't have a partition outside the disk!

    Read the article

  • Page Titles - Including gender of a fashion product in page titles?

    - by Cedric
    I need a bit of help to decide whether it is worth including gender in page titles. In Webmaster Tools, I looked at our search queries that include "women", and they account for 9% of our total search queries for the site. I am wondering if this is the right way to assess the benefit of including "women" or "men" in page titles, given that it only looks at results already pointing to us. Is there another tool where I can check the actual queries that may not include us in the search results? Like Google Insights maybe? http://www.google.com/insights/search/#q=shoes%2Cshoes%20for%20women&cmpt=q So it looks like 1.1% of searches for "shoes" are also "shoes for women" - is that correct? As a direct comparison, doing the same analysis on our own search queries, I get 1.8% when comparing "shoes for women" to "shoes". Implementing this automation would probably affect 99% of our site, if not more, splitting it into 2 segments (one portion of page titles including "women" and the other including "men"). Will doing so create a massively repetitive keyword throughout the site, hurting SEO? http://support.google.com/webmasters/bin/answer.py?hl=en&answer=35624 (see "Avoid repeated or boilerplate titles.")

    Read the article

  • Rawr Code Clone Analysis – Part 0

    - by Dylan Smith
    Code Clone Analysis is a cool new feature in Visual Studio 11 (vNext). It analyzes all the code in your solution and attempts to identify blocks of code that are similar, and thus candidates for refactoring to eliminate the duplication. The power lies in the fact that the blocks of code don't need to be identical for Code Clone to identify them; it will report Exact, Strong, Medium and Weak matches indicating how similar the blocks of code in question are. People who know me know that I'm anal enthusiastic about both writing clean code and taking old crappy code and making it suck less. So the possibilities for this feature have me pretty excited, if it works well - and that's a big if that I'm hoping to explore over the next few blog posts. I'm going to grab the Rawr source code from CodePlex (a World of Warcraft gear calculator engine program), run Code Clone Analysis against it, then go through the results one by one and refactor where appropriate, blogging along the way. My goals with this blog series are twofold:
        Evaluate and demonstrate Code Clone Analysis
        Provide some concrete examples of refactoring code to eliminate duplication and improve the code-base
    Here are the initial results. Code Clone Analysis has found:
        129 Exact Matches
        201 Strong Matches
        300 Medium Matches
        193 Weak Matches
    Also indicated is that there was a total of 45,181 potentially duplicated lines of code that could be eliminated through refactoring. Considering the entire solution only has 109,763 lines of code, if true, the number of duplicated lines is pretty significant. In the next post we'll start examining some of the individual results and determine if they really do indicate a potential refactoring.

    Read the article

  • Styles of games that work at low-resolution

    - by Brendan Long
    I'm taking a class on compilers, and the goal is to write a compiler for Meggy Jr devices (Arduino). The goal is just to make a simple compiler with loops and variables and stuff. Obviously, that's lame, so the "real goal" is to make an impressive game on the device. The problem is that it only has 64 pixels to work with (technically 72, but the top 8 are single-color and not part of the main display, so they're really only useful for displaying things like money). My problem is thinking of something to do on a device that small. It doesn't really matter if it's original, but it can't be something that's already available. My first idea was "snake", but that comes with the SDK. Same with a side-scrolling shooter. Remaining ideas include a tower defense game (hard to write, hard to control), an RPG (same), and tetris (lame). The problem is that all of the games I like require a high-resolution screen because they have a lot of text. Even a really simple game like nethack would be hard because each creature would be a single color. tl;dr: What styles of games require (a) no text and (b) few enough objects that representing each with a single color is acceptable? EDIT: To clarify, the display is 8x8 for a total of 64 pixels, not 64x64.

    Read the article

  • grub shows same linux image twice

    - by binW
    After a recent update, I get multiple entries for the same linux kernel version in the boot menu. I have tried running update-grub2, but it also lists the same linux-image version twice:

        adnan@adnan-laptop:/boot$ sudo update-grub2
        Generating grub.cfg ...
        Found linux image: /boot/vmlinuz-2.6.32-26-generic
        Found initrd image: /boot/initrd.img-2.6.32-26-generic
        Found Windows 7 (loader) on /dev/sda1
        Found linux image: /boot/vmlinuz-2.6.32-26-generic
        Found initrd image: /boot/initrd.img-2.6.32-26-generic
        Found memtest86+ image: /boot/memtest86+.bin
        done

    As you can see, vmlinuz and initrd are found multiple times. But there is only one vmlinuz and one initrd file in /boot:

        adnan@adnan-laptop:/boot$ ls -l
        total 15120
        -rw-r--r-- 1 root root  646144 2010-11-24 15:58 abi-2.6.32-26-generic
        -rw-r--r-- 1 root root  110601 2010-11-24 15:58 config-2.6.32-26-generic
        drwxr-xr-x 3 root root    4096 2011-01-01 18:59 grub
        -rw-r--r-- 1 root root 8335528 2010-12-20 23:36 initrd.img-2.6.32-26-generic
        -rw-r--r-- 1 root root  160280 2010-03-23 14:40 memtest86+.bin
        -rw-r--r-- 1 root root 2156100 2010-11-24 15:58 System.map-2.6.32-26-generic
        -rw-r--r-- 1 root root    1336 2010-11-24 16:00 vmcoreinfo-2.6.32-26-generic
        -rw-r--r-- 1 root root 4050080 2010-11-24 15:58 vmlinuz-2.6.32-26-generic

    Can someone tell me why update-grub2 finds vmlinuz and initrd twice, and how to stop this from happening?

    Read the article
