Search Results

Search found 2728 results on 110 pages for 'volume'.


  • Windows DD-esque implementation in Qt 5

    - by user1777667
    I'm trying to get into Qt, and as a project I want to pull a binary image from a hard drive in Windows. This is what I have:

        QFile dsk("//./PhysicalDrive1");
        dsk.open(QIODevice::ReadOnly);
        QByteArray readBytes = dsk.read(512);
        dsk.close();

        QFile img("C:/out.bin");
        img.open(QIODevice::WriteOnly);
        img.write(readBytes);
        img.close();

    When I run it, it creates the file, but when I view it in a hex editor, it says:

        ëR.NTFS ..........ø..?.ÿ.€.......€.€.ÿç......UT..............ö.......ì§á.Íá.`....ú3ÀŽÐ¼.|ûhÀ...hf.ˈ...f.>..NTFSu.´A»ªUÍ.r..ûUªu.÷Á..u.éÝ..ƒì.h..´HŠ...‹ô..Í.ŸƒÄ.žX.rá;...uÛ£..Á.....Z3Û¹. +Èfÿ.......ŽÂÿ...èK.+Èwï¸.»Í.f#Àu-f.ûTCPAu$.ù..r..h.».hp..h..fSfSfU...h¸.fa..Í.3À¿(.¹Ø.üóªé_...f`..f¡..f.....fh....fP.Sh..h..´BŠ.....‹ôÍ.fY[ZfYfY..‚..fÿ.......ŽÂÿ...u¼..faàø.è.. û.è..ôëý´.‹ð¬<.t.´.»..Í.ëòÃ..A disk read error occurred...BOOTMGR is missing...BOOTMGR is compressed...Press Ctrl+Alt+Del to restart...Œ©¾Ö..Uª

    Is there a better way of doing this? I tried running it as admin, but still no dice. Any help would be much appreciated.

    Update: I changed the code a bit. Now if I specify a dd image as the input it writes the image perfectly, but when I try to use a disk as the input it only writes 0x00.

        QTextStream(stdout) << "Start\n";

        QFile dsk("//./d:");
        //QFile dsk("//./PhysicalDrive1");
        //QFile dsk("//?/Device/Harddisk1/Partition0");
        //QFile dsk("//./Volume{e988ffc3-3512-11e3-99d8-806e6f6e6963}");
        //QFile dsk("//./Volume{04bbc7e2-a450-11e3-a9d9-902b34d5484f}");
        //QFile dsk("C:/out_2.bin");
        if (!dsk.open(QIODevice::ReadOnly))
            qDebug() << "Failed to open:" << dsk.errorString();
        qDebug() << "Size:" << dsk.size() << "\n";

        // Blank out image file
        QFile img(dst);
        img.open(QIODevice::WriteOnly);
        img.write(0);
        img.close();

        // Read and write operations
        while (!dsk.atEnd()) {
            QByteArray readBytes = dsk.readLine();
            qDebug() << "Reading: " << readBytes.size() << "\n";
            QFile img(dst);
            if (!img.open(QIODevice::Append))
                qDebug() << "Failed to open:" << img.errorString();
            img.write(readBytes);
        }

        QTextStream(stdout) << "End\n";

    I guess the real problem I'm having is how to open a volume with QFile in Windows. I tried a few variants, but to no avail.
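    One approach that sidesteps QFile's volume handling entirely is to do the device read with the Win32 API and use Qt only for everything else. Below is a minimal sketch, not from the original post, assuming PhysicalDrive1 exists and the process runs elevated. Two details matter here: reads on raw devices must be in multiples of the sector size, and QFile::readLine() scans for newline bytes, which is meaningless on raw sectors, so fixed-size reads are used instead.

        // Sketch: copy a raw disk to an image file with Win32 ReadFile.
        // Device path and buffer size are illustrative assumptions.
        #include <windows.h>
        #include <cstdio>

        int main()
        {
            HANDLE hDisk = CreateFileW(L"\\\\.\\PhysicalDrive1", GENERIC_READ,
                                       FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                                       OPEN_EXISTING, 0, NULL);
            if (hDisk == INVALID_HANDLE_VALUE) {
                std::printf("CreateFile failed: %lu\n", GetLastError());
                return 1;
            }

            std::FILE *out = std::fopen("C:\\out.bin", "wb");

            // 64 KB is a multiple of any common sector size (512 or 4096),
            // which raw-device reads require.
            static char buf[64 * 1024];
            DWORD got = 0;
            while (ReadFile(hDisk, buf, sizeof(buf), &got, NULL) && got > 0)
                std::fwrite(buf, 1, got, out);

            std::fclose(out);
            CloseHandle(hDisk);
            return 0;
        }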

    Read the article

  • Windows CE: Using IOCTL_DISK_GET_STORAGEID

    - by Bruce Eitman
    A customer approached me recently to ask if I had any code that demonstrated how to use STORAGE_IDENTIFICATION, which is the data structure used to get the Storage ID from a disk. I didn't have anything, which of course sends me off writing code and blogging about it. Simple enough, right? Go read the documentation for STORAGE_IDENTIFICATION, which led me to IOCTL_DISK_GET_STORAGEID. Except that the documentation for IOCTL_DISK_GET_STORAGEID seems to have a problem.

    The most obvious problem is that it shows how to call CreateFile() to get the handle to use with DeviceIoControl(), but doesn't show how to call DeviceIoControl(). That is odd, but not really a problem. But the call to CreateFile() seems to be wrong, or at least it was in my testing. The documentation shows the call to be:

        hVolume = CreateFile(TEXT("\\Storage Card\\Vol:"), GENERIC_READ|GENERIC_WRITE, 0, NULL, OPEN_EXISTING, 0, NULL);

    I tried that, but my testing with an SD card mounted as Storage Card failed on the call to CreateFile(). I tried several variations of this, but none worked. Then I remembered that some time ago I wrote an article about enumerating the disks (Windows CE: Displaying Disk Information). I pulled up that code and tried again with both the disk device name and the partition volume name. The disk device name worked. The device names are DSKx:, where x is the disk number. I created the following function to output the Manufacturer ID and Serial Number returned from IOCTL_DISK_GET_STORAGEID:

        #include "windows.h"
        #include "Diskio.h"

        BOOL DisplayDiskID( TCHAR *Disk )
        {
            STORAGE_IDENTIFICATION *StoreID = NULL;
            STORAGE_IDENTIFICATION GetSizeStoreID;
            DWORD dwSize;
            HANDLE hVol;
            TCHAR VolumeName[MAX_PATH];
            TCHAR *ManfID;
            TCHAR *SerialNumber;
            BOOL RetVal = FALSE;
            DWORD GLE;

            // Note that either of the following works
            //_stprintf(VolumeName, _T("\\%s\\Vol:"), Disk);
            _stprintf(VolumeName, _T("\\%s"), Disk);

            hVol = CreateFile( Disk, GENERIC_READ|GENERIC_WRITE, 0, NULL, OPEN_EXISTING, 0, NULL);

            if( hVol != INVALID_HANDLE_VALUE )
            {
                if(DeviceIoControl(hVol, IOCTL_DISK_GET_STORAGEID, (LPVOID)NULL, 0, &GetSizeStoreID, sizeof(STORAGE_IDENTIFICATION), &dwSize, NULL) == FALSE)
                {
                    GLE = GetLastError();
                    if( GLE == ERROR_INSUFFICIENT_BUFFER )
                    {
                        StoreID = (STORAGE_IDENTIFICATION *)malloc( GetSizeStoreID.dwSize );
                        if(DeviceIoControl(hVol, IOCTL_DISK_GET_STORAGEID, (LPVOID)NULL, 0, StoreID, GetSizeStoreID.dwSize, &dwSize, NULL) != FALSE)
                        {
                            RETAILMSG( 1, (TEXT("DisplayDiskID: Flags %X\r\n"), StoreID->dwFlags ));
                            if( !(StoreID->dwFlags & MANUFACTUREID_INVALID) )
                            {
                                ManfID = (TCHAR *)((DWORD)StoreID + StoreID->dwManufactureIDOffset);
                                RETAILMSG( 1, (TEXT("DisplayDiskID: Manufacture ID %s\r\n"), ManfID ));
                            }
                            if( !(StoreID->dwFlags & SERIALNUM_INVALID) )
                            {
                                SerialNumber = (TCHAR *)((DWORD)StoreID + StoreID->dwSerialNumOffset);
                                RETAILMSG( 1, (TEXT("DisplayDiskID: Serial Number %s\r\n"), SerialNumber ));
                            }
                            RetVal = TRUE;
                        }
                        else
                            RETAILMSG( 1, (TEXT("DisplayDiskID: DeviceIoControl failed (%d)\r\n"), GLE));

                        free(StoreID);
                    }
                    else
                        RETAILMSG( 1, (TEXT("No Disk Identification available for %s\r\n"), VolumeName ));
                }
                else
                    RETAILMSG( 1, (TEXT("DisplayDiskID: DeviceIoControl succeeded (and shouldn't have)\r\n")));

                CloseHandle (hVol);
            }
            else
                RETAILMSG( 1, (TEXT("DisplayDiskID: Failed to open volume (%s)\r\n"), VolumeName ));

            return RetVal;
        }

    Further testing showed that both \DSKx: and \DSKx:\Vol: work when calling CreateFile().

    Copyright © 2010 – Bruce Eitman All Rights Reserved
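    As an editorial aside (this harness is not part of Bruce's original article): a minimal caller for DisplayDiskID() might simply probe the DSKx: names described above. The loop bound is an assumption; a robust version would enumerate the disks as in the earlier "Displaying Disk Information" article.

        // Hypothetical test harness: probe DSK1: through DSK9: and print
        // the storage ID of whichever disks respond.
        #include <tchar.h>

        void DisplayAllDiskIDs(void)
        {
            TCHAR DiskName[16];
            int i;

            for (i = 1; i <= 9; i++)
            {
                _stprintf(DiskName, _T("DSK%d:"), i);
                DisplayDiskID(DiskName);   // prints an error for absent disks
            }
        }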

    Read the article

  • Clustering for Mere Mortals (Pt2)

    - by Geoff N. Hiten
    Planning. I could stop there and let that be the entirety of post #2 in this series. Planning is the single most important element in building a cluster, and the Laptop Demo Cluster is no exception. One of the more awkward parts of actually creating a cluster is coordinating information between Windows Clustering and SQL Clustering. The dialog boxes show up hours apart, but still have to have matching and consistent information. Excel seems to be a good tool for tracking these settings. My workbook has four pages: Systems, Storage, Network, and Service Accounts.

    The Systems page looks like this:

        Name         | Role                                        | Software                                   | Location
        East         | Physical Cluster Node 1                     | Windows Server 2008 R2 Enterprise          | Laptop VM
        West         | Physical Cluster Node 2                     | Windows Server 2008 R2 Enterprise          | Laptop VM
        North        | Physical Cluster Node 3 (Future Reserved)   | Windows Server 2008 R2 Enterprise          | Laptop VM
        MicroCluster | Cluster Management Interface                | N/A                                        | Laptop VM
        SQL01        | High-Performance High-Security Instance     | SQL Server 2008 Enterprise Edition x64 SP1 | Laptop VM
        SQL02        | High-Performance Standard-Security Instance | SQL Server 2008 Enterprise Edition x64 SP1 | Laptop VM
        SQL03        | Standard-Performance High-Security Instance | SQL Server 2008 Enterprise Edition x64 SP1 | Laptop VM

    Note that everything that has a computer name is listed here, whether physical or virtual.

    Storage looks like this:

        Storage Name | Instance     | Purpose         | Volume      | Path                      | Size (GB) | LUN ID | Speed
        Quorum       | MicroCluster | Cluster Quorum  | Quorum      | Q:                        | 2         |        |
        SQL01Anchor  | SQL01        | Instance Anchor | SQL01Anchor | L:                        | 2         |        |
        SQL02Anchor  | SQL02        | Instance Anchor | SQL02Anchor | M:                        | 2         |        |
        SQL01Data1   | SQL01        | SQL Data        | SQL01Data1  | L:\MountPoints\SQL01Data1 | 2         |        |
        SQL02Data1   | SQL02        | SQL Data        | SQL02Data1  | M:\MountPoints\SQL02Data1 |           |        |

    Starting at the left is the name used in the storage array. It is important to rename resources at each level, whether it is Storage, LUN, Volume, or disk folder. Otherwise, troubleshooting gets complex and difficult. You want to be able to glance at a resource at any level and see where it comes from and what it is connected to.

    Networking is the same way:

        System | Network   | VLAN     | IP                 | Subnet Mask   | Gateway     | DNS1        | DNS2
        East   | Public    | Cluster1 | 10.97.230.x (DHCP) | 255.255.255.0 | 10.97.230.1 | 10.97.230.1 | 10.97.230.1
        East   | Heartbeat | Cluster2 |                    | 255.255.255.0 |             |             |
        West   | Public    | Cluster1 | 10.97.230.x (DHCP) | 255.255.255.0 | 10.97.230.1 | 10.97.230.1 | 10.97.230.1
        West   | Heartbeat | Cluster2 |                    | 255.255.255.0 |             |             |
        North  | Public    | Cluster1 | 10.97.230.x (DHCP) | 255.255.255.0 | 10.97.230.1 | 10.97.230.1 | 10.97.230.1
        North  | Heartbeat | Cluster2 |                    | 255.255.255.0 |             |             |
        SQL01  | Public    | Cluster1 | 10.97.230.x (DHCP) | 255.255.255.0 |             |             |
        SQL02  | Public    | Cluster1 | 10.97.230.x (DHCP) | 255.255.255.0 |             |             |

    One hallmark of a poorly planned and implemented cluster is a bunch of "Local Network Connection #n" entries in the network settings page. That lets me know that somebody didn't care about the long-term supportability of the cluster. This can be critically important with Hyper-V clusters and their high NIC counts.

    Final page:

        Instance       | Service Name         | Account       | Password   | Domain  | OU
        SQL01          | SQL Server           | SVCSQL01      | Baseline22 | MicroAD | Service Accounts
        SQL01          | SQL Agent            | SVCSQL01      | Baseline22 | MicroAD | Service Accounts
        SQL02          | SQL Server           | SVC_SQL02     | Baseline22 | MicroAD | Service Accounts
        SQL02          | SQL Agent            | SVC_SQL02     | Baseline22 | MicroAD | Service Accounts
        SQL03 (Future) | SQL Server           | SVC_SQL03     | Baseline22 | MicroAD | Service Accounts
        SQL03 (Future) | SQL Agent            | SVC_SQL03     | Baseline22 | MicroAD | Service Accounts
                       | Installation Account | administrator |            |         |

    Yes, I write down the account information. I secure the file via NTFS, but I don't want to fumble around looking for passwords when it comes time to rebuild a node.

    Always fill out the workbook COMPLETELY before installing anything. The whole point is to have everything you need at your fingertips before you begin. The install experience is so much better and more productive with this information in place.

    Read the article

  • Big Data: Size isn’t everything

    - by Simon Elliston Ball
    Big Data has a big problem; it’s the word “Big”. These days, a quick Google search will uncover terabytes of negative opinion about the futility of relying on huge volumes of data to produce magical, meaningful insight. There are also many clichéd but correct assertions about the difficulties of correlation versus causation in massive data sets. In reading some of these pieces, I begin to understand how climatologists must feel when people complain ironically about “global warming” during snowfall.

    Big Data has a name problem. There is a lot more to it than size. Shape, Speed, and…err…Veracity are also key elements (now I understand why Gartner and the gang went with V’s instead of S’s).

    The need to handle data of different shapes (Variety) is not new. Data developers have always had to mold strange-shaped data into our reporting systems, integrating with semi-structured sources, and even straying into full-text searching. However, what we lacked was an easy way to add semi-structured and unstructured data to our arsenal. New “Big Data” tools such as MongoDB, and other NoSQL (Not Only SQL) databases, or a graph database like Neo4j, fill this gap. Still, to many, they simply introduce noise to the clean signal that is their sensibly normalized data structures.

    What about speed (Velocity)? It’s not just high-frequency trading that generates data faster than a single system can handle. Many other applications need to make trade-offs that traditional databases won’t, in order to cope with high data insert speeds, or to extract quickly the required information from data streams. Unfortunately, many people equate Big Data with the Hadoop platform, whose batch-driven queries and job processing queues have little to do with “velocity”. StreamInsight, Esper and Tibco BusinessEvents are examples of Big Data tools designed to handle high-velocity data streams. Again, the name doesn’t do the discipline of Big Data any favors. Ultimately, though, does analyzing fast-moving data produce insights as useful as the ones we get through a more considered approach, enabled by traditional BI?

    Finally, we have Veracity and Value. In many ways, these additions to the classic Volume, Velocity and Variety trio acknowledge the criticism that without high-quality data and genuinely valuable outputs, data, big or otherwise, is worthless. As a discipline, Big Data has recognized this, and data quality and cleaning tools are starting to appear to support it. Rather than simply decrying the irrelevance of Volume, we need as a profession to focus on how to improve Veracity and Value. Perhaps we should just declare the ‘Big’ silent, embrace these new data tools and help develop better practices for their use, just as we did with the good old RDBMS?

    What does Big Data mean to you? Which V gives your business the most pain, or the most value? Do you see these new tools as a useful addition to the BI toolbox, or are they just enabling a dangerous trend to find ghosts in the noise?

    Read the article

  • Speed up SQL Server queries with PREFETCH

    - by Akshay Deep Lamba
    Problem

    The SAN data volume has a throughput capacity of 400 MB/sec; however my query is still running slow and it is waiting on I/O (PAGEIOLATCH_SH). Windows Performance Monitor shows a data volume speed of 4 MB/sec. Where is the problem and how can I find it?

    Solution

    This is another summary of a great article published by R. Meyyappan at www.sqlworkshops.com. In my opinion, this is the first article that highlights and explains with working examples how PREFETCH determines the performance of a Nested Loop join.

    First of all, I just want to recall that Prefetch is a mechanism with which SQL Server can fire up many I/O requests in parallel for a Nested Loop join. When SQL Server executes a Nested Loop join, it may or may not enable Prefetch according to the number of rows in the outer table. If the number of rows in the outer table is greater than 25 then SQL Server will enable and use Prefetch to speed up query performance, but it will not if there are fewer than 25 rows. In this section we are going to see different scenarios where Prefetch is automatically enabled or disabled. These examples only use two tables, RegionalOrders and Orders. If you want to create the sample tables and sample data, please visit www.sqlworkshops.com. The breakdown of the data in the RegionalOrders table is shown below, and the Orders table contains about 6 million rows.

    In this first example, I am creating a stored procedure against the two tables and then executing it. Before running the stored procedure, I am going to include the actual execution plan.

        --Example provided by www.sqlworkshops.com
        --Create procedure that pulls orders based on City
        --Do not forget to include the actual execution plan
        CREATE PROC RegionalOrdersProc @City CHAR(20)
        AS
        BEGIN
        DECLARE @OrderID INT, @OrderDetails CHAR(200)
        SELECT @OrderID = o.OrderID, @OrderDetails = o.OrderDetails
        FROM RegionalOrders ao INNER JOIN Orders o ON (o.OrderID = ao.OrderID)
        WHERE City = @City
        END
        GO

        SET STATISTICS time ON
        GO

        --Example provided by www.sqlworkshops.com
        --Execute the procedure with parameter SmallCity1
        EXEC RegionalOrdersProc 'SmallCity1'
        GO

    After running the stored procedure, if we right-click on the Clustered Index Scan and click Properties we can see the Estimated Number of Rows is 24. If we right-click on Nested Loops and click Properties we do not see Prefetch, because it is disabled. This behavior was expected, because the number of rows containing the value ‘SmallCity1’ in the outer table is less than 25.

    Now, if I run the same procedure with parameter ‘BigCity’, will Prefetch be enabled?

        --Example provided by www.sqlworkshops.com
        --Execute the procedure with parameter BigCity
        --We are using cached plan
        EXEC RegionalOrdersProc 'BigCity'
        GO

    As we can see from the below screenshot, Prefetch is not enabled and the query takes around 7 seconds to execute. This is because the query used the cached plan from ‘SmallCity1’ that had Prefetch disabled. Please note that even if we have 999 rows for ‘BigCity’, the Estimated Number of Rows is still 24.

    Finally, let’s clear the procedure cache to trigger a new optimization and execute the procedure again.

        DBCC FREEPROCCACHE
        GO
        EXEC RegionalOrdersProc 'BigCity'
        GO

    This time our procedure runs under a second, Prefetch is enabled and the Estimated Number of Rows is 999.

    RegionalOrdersProc can be optimized by using the below example, where we are using an optimizer hint. I have also shown some other hints that could be used as well.

        --Example provided by www.sqlworkshops.com
        --You can fix the issue by using any of the following hints
        --Create procedure that pulls orders based on City
        DROP PROC RegionalOrdersProc
        GO
        CREATE PROC RegionalOrdersProc @City CHAR(20)
        AS
        BEGIN
        DECLARE @OrderID INT, @OrderDetails CHAR(200)
        SELECT @OrderID = o.OrderID, @OrderDetails = o.OrderDetails
        FROM RegionalOrders ao INNER JOIN Orders o ON (o.OrderID = ao.OrderID)
        WHERE City = @City
        --Hinting optimizer to use SmallCity2 for estimation
        OPTION (OPTIMIZE FOR (@City = 'SmallCity2'))
        --Hinting optimizer to estimate for the current parameters
        --OPTION (RECOMPILE)
        --Hinting optimizer not to use the histogram but rather
        --density for estimation (average of all 3 cities)
        --OPTION (OPTIMIZE FOR (@City UNKNOWN))
        --OPTION (OPTIMIZE FOR UNKNOWN)
        END
        GO

    In conclusion, this tip was mainly aimed at illustrating how Prefetch can speed up query execution and how a different number of rows can trigger this.

    Read the article

  • SQL SERVER – Quiz and Video – Introduction to Hierarchical Query using a Recursive CTE

    - by pinaldave
    This blog post is inspired from SQL Queries Joes 2 Pros: SQL Query Techniques For Microsoft SQL Server 2008 – SQL Exam Prep Series 70-433 – Volume 2. [Amazon] | [Flipkart] | [Kindle] | [IndiaPlaza]

    This is a follow-up to my earlier blog post on the same subject - SQL SERVER – Introduction to Hierarchical Query using a Recursive CTE – A Primer. In that article we discussed the basic terminology of the CTE. The article further covers the following important concepts of the common table expression:

    1. What is a Common Table Expression (CTE)
    2. Building a Recursive CTE
    3. Identify the Anchor and Recursive Query
    4. Add the Anchor and Recursive query to a CTE
    5. Add an expression to track hierarchical level
    6. Add a self-referencing INNER JOIN statement

    These six are the most important concepts related to CTEs and SQL Server. There are many more things one has to learn, but without the beginner's fundamentals one can't learn the advanced concepts. Let us have a small quiz and check how many of you get the fundamentals right.

    Quiz

    1) You have an employee table with the following data:

        EmpID  FirstName  LastName   MgrID
        1      David      Kennson    11
        2      Eric       Bender     11
        3      Lisa       Kendall    4
        4      David      Lonning    11
        5      John       Marshbank  4
        6      James      Newton     3
        7      Sally      Smith      NULL

    You need to write a recursive CTE that shows the EmpID, FirstName, LastName, MgrID, and employee level. The CEO should be listed at Level 1. All people who work for the CEO will be listed at Level 2. All of the people who work for those people will be listed at Level 3. Which CTE code will achieve this result?

    Option 1:

        WITH EmpList AS
        (SELECT Boss.EmpID, Boss.FName, Boss.LName, Boss.MgrID, 1 AS Lvl
         FROM Employee AS Boss
         WHERE Boss.MgrID IS NULL
         UNION ALL
         SELECT E.EmpID, E.FirstName, E.LastName, E.MgrID, EmpList.Lvl + 1
         FROM Employee AS E
         INNER JOIN EmpList ON E.MgrID = EmpList.EmpID)
        SELECT * FROM EmpList

    Option 2:

        WITH EmpList AS
        (SELECT EmpID, FirstName, LastName, MgrID, 1 as Lvl
         FROM Employee
         WHERE MgrID IS NULL
         UNION ALL
         SELECT EmpID, FirstName, LastName, MgrID, 2 as Lvl)
        SELECT * FROM BossList

    Option 3:

        WITH EmpList AS
        (SELECT EmpID, FirstName, LastName, MgrID, 1 as Lvl
         FROM Employee
         WHERE MgrID is NOT NULL
         UNION
         SELECT EmpID, FirstName, LastName, MgrID, BossList.Lvl + 1
         FROM Employee
         INNER JOIN EmpList BossList ON Employee.MgrID = BossList.EmpID)
        SELECT * FROM EmpList

    2) You have a table named Employee. The EmployeeID of each employee's manager is in the ManagerID column. You need to write a recursive query that produces a list of employees and their manager. The query must also include the employee's level in the hierarchy. You write the following code segment:

        WITH EmployeeList (EmployeeID, FullName, ManagerName, Level)
        AS
        (
        --PICK ANSWER CODE HERE
        )

    Option 1:

        SELECT EmployeeID, FullName, '' AS [ManagerID], 1 AS [Level]
        FROM Employee
        WHERE ManagerID IS NULL
        UNION ALL
        SELECT emp.EmployeeID, emp.FullName mgr.FullName, 1 + 1 AS [Level]
        FROM Employee emp
        JOIN Employee mgr ON emp.ManagerID = mgr.EmployeeId

    Option 2:

        SELECT EmployeeID, FullName, '' AS [ManagerID], 1 AS [Level]
        FROM Employee
        WHERE ManagerID IS NULL
        UNION ALL
        SELECT emp.EmployeeID, emp.FullName, mgr.FullName, mgr.Level + 1
        FROM EmployeeList mgr
        JOIN Employee emp ON emp.ManagerID = mgr.EmployeeId

    Now make sure that you write down all the answers on a piece of paper. Watch the following video and read the earlier article over here. If you want to change an answer, you still have a chance.

    Solution

    1) Option 1
    2) Option 2

    Now let us check the answers; compare your answers to the ones above. I am very confident you got them correct.

    Available at: USA: Amazon | India: Flipkart | IndiaPlaza. Volumes: 1, 2, 3, 4, 5.

    Please leave your feedback in the comment area for the quiz and video. Did you know all the answers of the quiz?

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Big Data – Operational Databases Supporting Big Data – Key-Value Pair Databases and Document Databases – Day 13 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of the Relational Database and the NoSQL database in the Big Data story. In this article we will understand the role of Key-Value Pair Databases and Document Databases in supporting the Big Data story. Now we will see a few examples of the operational databases.

    - Relational Databases (Yesterday's post)
    - NoSQL Databases (Yesterday's post)
    - Key-Value Pair Databases (This post)
    - Document Databases (This post)
    - Columnar Databases (Tomorrow's post)
    - Graph Databases (Tomorrow's post)
    - Spatial Databases (Tomorrow's post)

    Key-Value Pair Databases

    Key-Value Pair databases are also known as KVP databases. A key is a field name, an attribute, an identifier. The content of that field is its value, the data that is being identified and stored. They are a very simple implementation of NoSQL database concepts. They do not have a schema, hence they are very flexible as well as scalable. The disadvantage of Key-Value Pair (KVP) databases is that they do not follow the ACID (Atomicity, Consistency, Isolation, Durability) properties. Additionally, they require data architects to plan for data placement, replication, and high availability. In KVP databases the data is stored as strings. Here is a simple example of what a Key-Value database looks like (a code sketch of the idea appears at the end of this post):

        Key     | Value
        Name    | Pinal Dave
        Color   | Blue
        Twitter | @pinaldave
        Name    | Nupur Dave
        Movie   | The Hero

    As the number of users grows in Key-Value Pair databases, it starts getting difficult to manage the entire database. As there is no specific schema or rules associated with the database, there is a chance that the database grows exponentially as well. It is very crucial to select the right Key-Value Pair database, one which offers an additional set of tools to manage the data and provides finer control over various business aspects of the same.

    Riak

    Riak is one of the most popular Key-Value databases. It is known for its scalability and performance with high-volume, high-velocity data. Additionally, it implements a mechanism for collecting keys and values which further helps to build a manageable system. We will discuss Riak further in future blog posts.

    Key-Value databases are a good choice for social media, communities, and caching layers for connecting other databases. In simpler words, whenever we require flexibility of data storage while keeping scalability in mind, KVP databases are good options to consider.

    Document Database

    There are two different kinds of document databases: 1) full document content (web pages, Word docs, etc.) and 2) storing document components for storage. It is the second type of document database we are talking about here. They use JavaScript Object Notation (JSON) and Binary JSON for the structure of the documents. JSON is a very easy to understand format and it is very easy to write for applications. There are two major JSON structures used in document databases: 1) name-value pairs and 2) ordered lists. MongoDB and CouchDB are two of the most popular open source non-relational document databases.

    MongoDB

    MongoDB databases are called collections. Each collection is built of documents, and each document is composed of fields. MongoDB collections can be indexed for optimal performance. The MongoDB ecosystem is highly available, and supports query services as well as MapReduce. It is often used in high-volume content management systems.

    CouchDB

    CouchDB databases are composed of documents which consist of fields and attachments (known as description). It supports ACID properties. The main attraction point of CouchDB is that it will continue to operate even when network connectivity is sketchy. Due to this, CouchDB prefers local data storage.

    A document database is a good choice when users have to generate dynamic reports from elements which change very frequently. A good example of document usage is real-time analytics in social networking or a content management system.

    Tomorrow

    In tomorrow's blog post we will discuss various other operational databases supporting Big Data.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
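    Editorial footnote: to make the key-value idea above concrete, here is a minimal in-memory sketch. It is an illustration only; real KVP stores such as Riak add distribution, replication, and persistence on top of this basic shape, and scope keys (for example by bucket) so that a duplicate key like the second "Name" in the table does not overwrite the first.

        // Minimal key-value store: string keys, string values, no schema.
        #include <iostream>
        #include <string>
        #include <unordered_map>

        int main()
        {
            std::unordered_map<std::string, std::string> store;

            store["Name"]    = "Pinal Dave";
            store["Color"]   = "Blue";
            store["Twitter"] = "@pinaldave";

            // The store attaches no meaning to values; interpreting
            // "Color" versus "Twitter" is entirely up to the application.
            std::cout << store["Twitter"] << "\n";   // prints @pinaldave
            return 0;
        }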

    Read the article

  • Oracle OpenWorld Update: Oracle GoldenGate for High Availability

    - by Doug Reid
    One of our primary themes this year for the Oracle OpenWorld sessions featuring Oracle GoldenGate is High Availability. This is a pretty wide theme, but the focus will be on ways of maximizing uptime for critical systems during planned and unplanned events. We have a number of very informative sessions dedicated to exploring this theme in detail, from deep product implementation strategies up to lessons learned by our customers when using Oracle GoldenGate to meet strict SLAs.

    We kick this track off with our Customer Panel on Zero Downtime Operations on Monday, which I overviewed in my last posting. This is followed by Comcast, who will be hosting a session at 1:45 PM in Moscone West 3014. Their session will discuss using Oracle GoldenGate to reduce downtime during a database upgrade. Here's an overview:

    CON8571 - Oracle Database Upgrade with Oracle GoldenGate: Best Practices from Comcast

    Does your business demand high availability? In this session, Comcast, among the largest telecom firms in the world, shares tips on how to achieve zero downtime while upgrading to Oracle Database 11g Release 2, using a combination of Oracle technologies: Oracle Real Application Clusters (Oracle RAC), Oracle Database's Grid Infrastructure and Oracle Recovery Manager (Oracle RMAN) features, Oracle GoldenGate, and Oracle Active Data Guard. This successful upgrade took place on a mission-critical system that handles more than 60 million business requests and service calls a day. You'll also hear how Comcast leverages Oracle Advanced Customer Support Services, including an Oracle Solution Support Center, to maximize performance and availability of its Oracle technologies.

    On Tuesday, Joydip Kundu (Director of Software Development) will be presenting "Oracle GoldenGate and Oracle Data Guard: Working Together Seamlessly" at 10:15 AM in Moscone South 3005. This session focuses on how both modes of Oracle GoldenGate extract (Classic and Integrated Capture) can be used with Oracle Data Guard for disaster recovery purposes or to offload extract processing.

    That afternoon at 1:15 PM, Comcast takes the stage again to discuss firsthand lessons learned implementing Oracle GoldenGate in a heterogeneous, highly available environment. Here's a rundown of their session:

    CON8750 - High-Volume OLTP with Oracle GoldenGate: Best Practices from Comcast

    Does your business demand high availability in a mission-critical environment? In this session, Comcast, one of the largest telecom firms, shares best practices for leveraging Oracle GoldenGate to replicate high-volume online transaction processing data from Tandem NSK SQL/MX to Teradata. Hear critical success factors from Comcast for overall platform and component architectures as well as configuration and tuning techniques. Learn how it met the challenges of replication in a complex heterogeneous environment. You'll also hear how Comcast leverages Oracle Advanced Customer Support Services, including an Oracle Solution Support Center, to provide mission-critical support for maximized performance and availability of its Oracle environment.

    The final session on the high availability track will be hosted by Patricia Mcelroy (Distinguished Product Manager) and Stephan Haisly (Principal Member of Technical Staff). Their session (CON8401 - Tuning and Troubleshooting Oracle GoldenGate on Oracle Database) covers techniques for performance tuning and troubleshooting of Oracle GoldenGate on Oracle Database. Using various types of workloads (OLTP, batch, Oracle's PeopleSoft Enterprise), the presentation steps through the process of monitoring and troubleshooting the configuration to maximize performance and replication throughput within and between Oracle clouds.

    Join us at our sessions or stop by our demo pods in Moscone South and meet the product management and development teams.

    Read the article

  • Lubuntu upgrade to 13.04 killed sound with ALSA. How to troubleshoot?

    - by Sven
    After upgrading from 12.10 to 13.04, Lubuntu lost audio playback after I unplugged a USB soundcard (Polycom) and plugged it back in. The volume control was gray and led to the PulseAudio mixer (not installed), so I uninstalled the pulseaudio package. I also removed and reinstalled the alsa-base package. After a restart I have alsamixer back, everything seemingly as usual (volume 100%, unmuted), but every sound program gets errors no matter what device I select.

    aplay -L:

        null
            Discard all samples (playback) or generate zero samples (capture)
        pulse
            PulseAudio Sound Server
        default:CARD=NVidia
            HDA NVidia, ALC662 rev1 Analog
            Default Audio Device
        sysdefault:CARD=NVidia
            HDA NVidia, ALC662 rev1 Analog
            Default Audio Device
        front:CARD=NVidia,DEV=0
            HDA NVidia, ALC662 rev1 Analog
            Front speakers
        surround40:CARD=NVidia,DEV=0
            HDA NVidia, ALC662 rev1 Analog
            4.0 Surround output to Front and Rear speakers
        surround41:CARD=NVidia,DEV=0
            HDA NVidia, ALC662 rev1 Analog
            4.1 Surround output to Front, Rear and Subwoofer speakers
        surround50:CARD=NVidia,DEV=0
            HDA NVidia, ALC662 rev1 Analog
            5.0 Surround output to Front, Center and Rear speakers
        surround51:CARD=NVidia,DEV=0
            HDA NVidia, ALC662 rev1 Analog
            5.1 Surround output to Front, Center, Rear and Subwoofer speakers
        surround71:CARD=NVidia,DEV=0
            HDA NVidia, ALC662 rev1 Analog
            7.1 Surround output to Front, Center, Side, Rear and Woofer speakers
        iec958:CARD=NVidia,DEV=0
            HDA NVidia, ALC662 rev1 Digital
            IEC958 (S/PDIF) Digital Audio Output
        hdmi:CARD=NVidia,DEV=0
            HDA NVidia, HDMI 0
            HDMI Audio Output
        dmix:CARD=NVidia,DEV=0
            HDA NVidia, ALC662 rev1 Analog
            Direct sample mixing device
        dmix:CARD=NVidia,DEV=1
            HDA NVidia, ALC662 rev1 Digital
            Direct sample mixing device
        dmix:CARD=NVidia,DEV=3
            HDA NVidia, HDMI 0
            Direct sample mixing device
        dsnoop:CARD=NVidia,DEV=0
            HDA NVidia, ALC662 rev1 Analog
            Direct sample snooping device
        dsnoop:CARD=NVidia,DEV=1
            HDA NVidia, ALC662 rev1 Digital
            Direct sample snooping device
        dsnoop:CARD=NVidia,DEV=3
            HDA NVidia, HDMI 0
            Direct sample snooping device
        hw:CARD=NVidia,DEV=0
            HDA NVidia, ALC662 rev1 Analog
            Direct hardware device without any conversions
        hw:CARD=NVidia,DEV=1
            HDA NVidia, ALC662 rev1 Digital
            Direct hardware device without any conversions
        hw:CARD=NVidia,DEV=3
            HDA NVidia, HDMI 0
            Direct hardware device without any conversions
        plughw:CARD=NVidia,DEV=0
            HDA NVidia, ALC662 rev1 Analog
            Hardware device with all software conversions
        plughw:CARD=NVidia,DEV=1
            HDA NVidia, ALC662 rev1 Digital
            Hardware device with all software conversions
        plughw:CARD=NVidia,DEV=3
            HDA NVidia, HDMI 0
            Hardware device with all software conversions
        default:CARD=Communicator
            Default Audio Device
        sysdefault:CARD=Communicator
            Default Audio Device
        front:CARD=Communicator,DEV=0
            Polycom Communicator, USB Audio
            Front speakers
        surround40:CARD=Communicator,DEV=0
            Polycom Communicator, USB Audio
            4.0 Surround output to Front and Rear speakers
        surround41:CARD=Communicator,DEV=0
            Polycom Communicator, USB Audio
            4.1 Surround output to Front, Rear and Subwoofer speakers
        surround50:CARD=Communicator,DEV=0
            Polycom Communicator, USB Audio
            5.0 Surround output to Front, Center and Rear speakers
        surround51:CARD=Communicator,DEV=0
            Polycom Communicator, USB Audio
            5.1 Surround output to Front, Center, Rear and Subwoofer speakers
        surround71:CARD=Communicator,DEV=0
            Polycom Communicator, USB Audio
            7.1 Surround output to Front, Center, Side, Rear and Woofer speakers
        iec958:CARD=Communicator,DEV=0
            Polycom Communicator, USB Audio
            IEC958 (S/PDIF) Digital Audio Output
        dmix:CARD=Communicator,DEV=0
            Polycom Communicator, USB Audio
            Direct sample mixing device
        dsnoop:CARD=Communicator,DEV=0
            Polycom Communicator, USB Audio
            Direct sample snooping device
        hw:CARD=Communicator,DEV=0
            Polycom Communicator, USB Audio
            Direct hardware device without any conversions
        plughw:CARD=Communicator,DEV=0
            Polycom Communicator, USB Audio
            Hardware device with all software conversions

    /etc/asound.conf:

        defaults.ctl.card 1
        defaults.pcm.card 1
        defaults.pcm.device 1

    The following gets the same result with both devices.

    aplay -vv -D front:CARD=NVidia,DEV=0 "Release the Pressure.wav":

        Playing WAVE 'Release the Pressure.wav' : Signed 16 bit Little Endian, Rate 44100 Hz, Mono
        aplay: set_params:1087: Channels count non available

    Guayadeque mp3 playback:

        AL lib: alsa_open_playback: Could not open playback device 'default': No such file or directory
        21:32:14: Error: Gstreamer error 'Configured audiosink playbackbin is not working.'

    Audacious:

        ALSA error: snd_mixer_attach failed: No such file or directory.
        ALSA error: snd_pcm_open failed: No such device.

    So how do I fix my audio?

    UPDATE: I removed the USB soundcard and got rid of all the ALSA config. Everything is working as before the install, but it sure feels fragile.

    Read the article

  • Shadows shimmer when camera moves

    - by Chad Layton
    I've implemented shadow maps in my simple block engine as an exercise. I'm using one directional light and using the view volume to create the shadow matrices. I'm experiencing some problems with the shadows shimmering when the camera moves, and I'd like to know if it's an issue with my implementation or just an issue with basic/naive shadow mapping itself.

    Here's a video: http://www.youtube.com/watch?v=vyprATt5BBg&feature=youtu.be

    Here's the code I use to create the shadow matrices. The commented-out code is my original attempt to perfectly fit the view frustum. You can also see my attempt to try clamping movement to texels in the shadow map, which didn't seem to make any difference. Then I tried using a bounding sphere instead, also to no apparent effect.

        public void CreateViewProjectionTransformsToFit(Camera camera, out Matrix viewTransform, out Matrix projectionTransform, out Vector3 position)
        {
            BoundingSphere cameraViewFrustumBoundingSphere = BoundingSphere.CreateFromFrustum(camera.ViewFrustum);

            float lightNearPlaneDistance = 1.0f;
            Vector3 lookAt = cameraViewFrustumBoundingSphere.Center;
            float distanceFromLookAt = cameraViewFrustumBoundingSphere.Radius + lightNearPlaneDistance;
            Vector3 directionFromLookAt = -Direction * distanceFromLookAt;
            position = lookAt + directionFromLookAt;

            viewTransform = Matrix.CreateLookAt(position, lookAt, Vector3.Up);

            float lightFarPlaneDistance = distanceFromLookAt + cameraViewFrustumBoundingSphere.Radius;
            float diameter = cameraViewFrustumBoundingSphere.Radius * 2.0f;
            Matrix.CreateOrthographic(diameter, diameter, lightNearPlaneDistance, lightFarPlaneDistance, out projectionTransform);

            //Vector3 cameraViewFrustumCentroid = camera.ViewFrustum.GetCentroid();
            //position = cameraViewFrustumCentroid - (Direction * (camera.FarPlaneDistance - camera.NearPlaneDistance));
            //viewTransform = Matrix.CreateLookAt(position, cameraViewFrustumCentroid, Up);

            //Vector3[] cameraViewFrustumCornersWS = camera.ViewFrustum.GetCorners();
            //Vector3[] cameraViewFrustumCornersLS = new Vector3[8];
            //Vector3.Transform(cameraViewFrustumCornersWS, ref viewTransform, cameraViewFrustumCornersLS);

            //Vector3 min = cameraViewFrustumCornersLS[0];
            //Vector3 max = cameraViewFrustumCornersLS[0];
            //for (int i = 1; i < 8; i++)
            //{
            //    min = Vector3.Min(min, cameraViewFrustumCornersLS[i]);
            //    max = Vector3.Max(max, cameraViewFrustumCornersLS[i]);
            //}

            //// Clamp to nearest texel
            //float texelSize = 1.0f / Renderer.ShadowMapSize;
            //min.X -= min.X % texelSize;
            //min.Y -= min.Y % texelSize;
            //min.Z -= min.Z % texelSize;
            //max.X -= max.X % texelSize;
            //max.Y -= max.Y % texelSize;
            //max.Z -= max.Z % texelSize;

            //// We just use an orthographic projection matrix. The sun is so far away that its rays are essentially parallel.
            //Matrix.CreateOrthographicOffCenter(min.X, max.X, min.Y, max.Y, -max.Z, -min.Z, out projectionTransform);
        }

    And here's the relevant part of the shader:

        if (CastShadows)
        {
            float4 positionLightCS = mul(float4(position, 1.0f), LightViewProj);
            float2 texCoord = clipSpaceToScreen(positionLightCS) + 0.5f / ShadowMapSize;
            float shadowMapDepth = tex2D(ShadowMapSampler, texCoord).r;
            float distanceToLight = length(LightPosition - position);
            float bias = 0.2f;
            if (shadowMapDepth < (distanceToLight - bias))
            {
                return float4(0.0f, 0.0f, 0.0f, 0.0f);
            }
        }

    The shimmer is slightly better if I drastically reduce the view volume, but I think that's mostly just because the texels become smaller and it's harder to notice them flickering back and forth. I'd appreciate any insight; I'd very much like to understand what's going on before I try other techniques.
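    Editorial footnote, not from the post: a standard mitigation for this kind of shimmer (described, for example, in Microsoft's "Common Techniques to Improve Shadow Depth Maps") is to keep the orthographic projection a fixed size, as the bounding-sphere code above already does, and then additionally snap the light-space view origin to whole shadow-map texels, so the sample grid does not slide as the camera moves. As a math sketch, for a projection of width d rendered to an s-by-s map:

        % world units covered by one shadow-map texel
        w = d / s
        % snap the light-space origin (x, y) to texel increments
        % before building the light's view matrix
        x' = \lfloor x / w \rfloor \, w, \qquad y' = \lfloor y / w \rfloor \, w

    The commented-out clamp in the code snaps the frustum-corner bounds in normalized units rather than snapping the view origin in world units per texel of the fixed-size projection, which is one plausible reason it had no visible effect.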

    Read the article

  • Where to store front-end data for "object calculator"

    - by Justin Grahn
    I recently completed a language library that acts as a giant filter for food items, and flows a bit like this: Products -> Recipes -> MenuItems -> Meals and finally, upon submission, creates an Order. I have also completed a database structure that stores all the pertinent information for each class, and it seems to fit my needs. The issue I'm having is linking the two.

    I imagined all of the information being local to each instance of the product, where there exists one back-end user who edits and manipulates data, and multiple front-end users who select their Meal(s) to create an Order. Ideally, all of the front-end users would have all of this information stored locally within the library, and would update the library on startup from a database.

    How should I go about storing the data so that I can load it into the library every time the user opens the application? Do I package a database onboard and just load and populate every time? The only method I can currently conceive of doing this, even if I only have 500 possible Product objects, would require me to foreach the list for every Product that I need to match to a Recipe, and so on and so forth, every time I relaunch the program, which seems like a lot of wasteful loading.

    Here is a general flow of my architecture:

    Products:

        public class Product : IPortionable
        {
            public Product(string n, uint pNumber = 0)
            {
                name = n;
                productNumber = pNumber;
            }

            public string name { get; set; }
            public uint productNumber { get; set; }
        }

    Recipes:

        public Recipe(string n, decimal yieldAmt, Volume.Unit unit)
        {
            name = n;
            yield = new Volume(yieldAmt, unit);
            yield.ConvertUnit();
        }

        /// <summary>
        /// Creates a new ingredient object
        /// </summary>
        /// <param name="n">Name</param>
        /// <param name="yieldAmt">Recipe Yield</param>
        /// <param name="unit">Unit of Yield</param>
        public Recipe(string n, decimal yieldAmt, Weight.Unit unit)
        {
            name = n;
            yield = new Weight(yieldAmt, unit);
        }

        public Recipe(Recipe r)
        {
            name = r.name;
            yield = r.yield;
            ingredients = r.ingredients;
        }

        public string name { get; set; }
        public IMeasure yield;
        public Dictionary<IPortionable, IMeasure> ingredients = new Dictionary<IPortionable, IMeasure>();

    MenuItems:

        public abstract class MenuItem : IScalable
        {
            public static string title = null;

            public string name { get; set; }
            public decimal maxPortionSize { get; set; }
            public decimal minPortionSize { get; set; }

            public Dictionary<IPortionable, IMeasure> ingredients = new Dictionary<IPortionable, IMeasure>();

    and Meal:

        public class Meal
        {
            public Meal(int guests)
            {
                guestCount = guests;
            }

            public int guestCount { get; private set; }

            //TODO: Make a new MainCourse class that holds pasta and Entree
            public Dictionary<string, int> counts = new Dictionary<string, int>(){
                {MainCourse.title, 0},
                {Side.title, 0},
                {Appetizer.title, 0}
            };

            public List<MenuItem> items = new List<MenuItem>();

    The database just stores and links each of these basic names and amounts together using IDs (RecipeID, ProductID and MenuItemID).
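    Not part of the original question, but as a sketch of the usual way to avoid the per-Product foreach described above: index each table once by its ID at load time, so resolving a Recipe's product references becomes a hash lookup instead of a scan. All names here are hypothetical, and it is shown in C++ only to keep the shape of the idea language-neutral relative to the post.

        // Resolve Recipe -> Product links in one pass using an ID-keyed map.
        #include <string>
        #include <unordered_map>
        #include <vector>

        struct Product   { unsigned id; std::string name; };
        struct RecipeRow { unsigned id; std::vector<unsigned> productIds; };
        struct Recipe    { unsigned id; std::vector<const Product*> ingredients; };

        std::vector<Recipe> Link(const std::vector<Product>& products,
                                 const std::vector<RecipeRow>& rows)
        {
            std::unordered_map<unsigned, const Product*> byId;
            for (const Product& p : products)
                byId[p.id] = &p;                        // index once: O(n)

            std::vector<Recipe> out;
            for (const RecipeRow& r : rows) {
                Recipe rec{r.id, {}};
                for (unsigned pid : r.productIds) {
                    auto it = byId.find(pid);           // O(1) per reference
                    if (it != byId.end())
                        rec.ingredients.push_back(it->second);
                }
                out.push_back(std::move(rec));
            }
            return out;
        }

    With 500 products the scan cost hardly matters, but the keyed-load shape also means the local cache can be rebuilt from the database tables in a single pass per table on startup.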

    Read the article

  • “Apparently, you signed a software services agreement without fully understanding it.”

    - by Dave Ballantyne
    I am not a lawyer. Let me say that again: I am not a lawyer.

    Today's Dilbert has prompted me to post about my recent experience with SQL Server licensing. I'm in the technical realm and rarely have much to do with purchasing and licensing. I say "I need"; budget realities will dictate whether I actually get. However, I do keep my ear to the ground and, due to my community involvement, I know, or at least have an understanding of, some licensing restrictions.

    Due to a misunderstanding, Microsoft Licensing stated that we needed licenses for our standby servers. I knew that that was not the case, and a quick tweet confirmed it. So after composing an email stating exactly what the machines in question were used for, i.e. log-shipped to and used in a disaster recovery scenario only, and posting several Technet articles to back this up, we saved two Enterprise Edition licences, a not inconsiderable cost.

    However, during this discussion I was made aware of another 'legalese' document that could completely override the referenced articles and anything I knew, or thought I knew, about SQL Server licensing. Personally, I had no knowledge of this. The "Product Use Rights" agreement would appear to be the volume-licensing equivalent of the "End User License Agreement" click-throughs we all know and ignore. Here is a direct quote from Microsoft Licensing, when asked for clarification:

    "Thanks for your email. Just to give some background on the Product Use Rights (PUR), licenses acquired through volume licensing are bound by the most recent PUR at the time of license acquisition. The link for the current PUR and PUR archive is http://www.microsoft.com/licensing/about-licensing/product-licensing.aspx. Further to this, products acquired through boxed product or pre-installed on hardware (OEM) are bound by the End User License Agreement (EULA). The PUR will explain limitations, license requirements and rulings on areas like multiplexing, virtualization, processor licensing, etc. When an article will appear on a Microsoft site or blog describing the licensing of a product, it will be using the PUR as a base. Due to the writing style or language used by the person writing areas of the website or technical blogs, the PUR is what you should use as a rule and not any of the other media. The PUR is updated quarterly and will reference every product available at that time working on the latest version unless otherwise stated. The crux of this is that the PUR is written after extensive discussions between the different branches of Microsoft (legal, technical, etc) and the wording is then approved. This is not always the case for some pages explaining licensing as they are merely intended to advise and not subject to the intense scrutiny as the PUR."

    So, exactly what does that mean?

    My take: this is a living document, "updated quarterly", though presumably this could be done on a whim and a fancy. It could state that you are only licensed if, during install, you stand in a corner juggling, and that photographic evidence is required. A plainly ridiculous demand, but what else could it override, or what new requirements could it state, that change your existing understanding of the product or your legal usage of it?

    As I say, I'm not a lawyer, but are you checking the PUR prior to purchase?

    Read the article

  • Why doesn't AVAudioPlayer stop/pause in viewWillDisappear?

    - by Rahul Vyas
    I am using AVAudioPlayer in my app. In viewWillDisappear I want to pause/stop the sound, and in viewWillAppear I want to play the sound again. How do I do this? I'm using this code in viewWillAppear:

        if (self.TickPlayer) {
            [self.TickPlayer play];
        }
        if (self.TickPlayer.volume < 0) {
            self.TickPlayer.volume = 1.0;
        }

    and this in viewWillDisappear:

        if (self.TickPlayer) {
            [self.TickPlayer stop];
        }

    Here is the method which plays the sound:

        - (void)PlayTickTickSound:(NSString *)SoundFileName
        {
            // Get the filename of the sound file:
            NSString *path = [NSString stringWithFormat:@"%@%@",
                [[NSBundle mainBundle] resourcePath],
                [NSString stringWithFormat:@"/%@", SoundFileName]];

            // Get a URL for the sound file
            NSURL *filePath = [NSURL fileURLWithPath:path isDirectory:NO];
            NSError *error;

            if (self.TickPlayer == nil) {
                self.TickPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:filePath error:&error];
                // handle errors here.
                self.TickPlayer.delegate = self;
                [self.TickPlayer setNumberOfLoops:-1]; // repeat forever
                [self.TickPlayer play];
            }
            if (self.TickPlayer) {
                [self.TickPlayer play];
            }
        }

    and I'm calling it in a method which fires every second, using a timer set up in viewDidLoad:

        self.timer = [NSTimer scheduledTimerWithTimeInterval:1.0
                                                      target:self
                                                    selector:@selector(updateCountdown)
                                                    userInfo:nil
                                                     repeats:YES];

        - (void)updateCountdown
        {
            [self PlayTickTickSound:@"tick.caf"];
        }

    I'm also playing a beep sound using AVAudioPlayer under a specific condition when an alert appears. That works fine, but this is not working as expected.

    Read the article

  • a floating toolbar in WTL

    - by freefallr
    I've created a multimedia app that uses DirectShow to display multiple media streams simultaneously. The app is a WTL MDI application. For video windows, I use a CWindowImpl-derived class, one per CChildFrame. I'd like to add controls to the video windows (volume controls etc.).

    I'd initially thought about adding a slider (volume) control and a couple of buttons to a context menu, but later thought that this might not be the best approach. I was looking at MS Word 2007, which has a floating toolbar that allows you to change options on highlighted text. I'd like to implement a similar floating toolbar for the video controls. I googled around a bit and found an old post about floating toolbars in WTL. The response was: for a floating toolbar, create a popup window and make its parent the main window. I think that this sounds like a reasonable approach.

    My questions:

    1. Is this a good approach, or is there a more standard approach for a floating toolbar now in WTL?
    2. Should I make the toolbar a child of the video window or of the CChildFrame that contains the video window, in order to ensure that it always remains on top of the video?
    3. How can I implement transparency in the floating toolbar, as in the floating toolbar in MS Word?
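    Editorial sketch of the popup-toolbar idea, with layered-window transparency for question 3. This is illustrative WTL only (it assumes the usual atlapp.h/atlwin.h includes and an initialized CAppModule, and the size and alpha values are arbitrary), not a drop-in implementation:

        // Floating, semi-transparent toolbar window.
        // WS_EX_TOOLWINDOW keeps it out of the taskbar;
        // WS_EX_LAYERED enables whole-window alpha.
        class CFloatingToolbar : public CWindowImpl<CFloatingToolbar>
        {
        public:
            DECLARE_WND_CLASS(_T("FloatingToolbar"))

            BEGIN_MSG_MAP(CFloatingToolbar)
                MESSAGE_HANDLER(WM_CREATE, OnCreate)
            END_MSG_MAP()

            HWND Create(HWND hWndOwner)
            {
                // WS_POPUP with an owner: floats above the owner without
                // being clipped to its client area.
                CRect rc(0, 0, 200, 32);
                return CWindowImpl<CFloatingToolbar>::Create(hWndOwner, rc,
                    NULL, WS_POPUP | WS_BORDER,
                    WS_EX_TOOLWINDOW | WS_EX_LAYERED);
            }

            LRESULT OnCreate(UINT, WPARAM, LPARAM, BOOL&)
            {
                // Roughly 70% opaque (255 = fully opaque).
                ::SetLayeredWindowAttributes(m_hWnd, 0, 178, LWA_ALPHA);
                return 0;
            }
        };

    On question 2: because Windows keeps owned popup windows above their owner in the z-order, owning the toolbar to the main frame keeps it above all MDI children, while owning it to the video child ties its lifetime and stacking to that one window; either can be made to work.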

    Read the article

  • F# and .Net versions

    - by rwallace
    I'm writing a program in F# at the moment, which I specified in the Visual Studio project setup to target .NET 3.5, this being the highest offered, on the theory that I might as well get the best available. Then I tried just now running the compiled program on an XP box, not expecting it to work, but just to see what would happen. Unsurprisingly I just got an error message demanding an appropriate version of the framework, but surprisingly it wasn't 3.5 it demanded, but 2.0.50727.

    An additional puzzle is the version of MSBuild I'm using to compile the release version of the program, which I found in the framework 3.5 directory but claims to be framework 2.0 and build engine 3.5. I just guessed it was the right version of MSBuild to use because it seemed to correspond with the highest framework version F# seems to be able to target, but should I be using a different version? Anyone have any idea what's going on?

        C:\Windows>dir/s msbuild.exe
         Volume in drive C is OS
         Volume Serial Number is 0422-C2D0

         Directory of C:\Windows\Microsoft.NET\Framework\v2.0.50727

        27/07/2008  19:03            69,632 MSBuild.exe
                       1 File(s)         69,632 bytes

         Directory of C:\Windows\Microsoft.NET\Framework\v3.5

        29/07/2008  23:40            91,136 MSBuild.exe
                       1 File(s)         91,136 bytes

         Directory of C:\Windows\Microsoft.NET\Framework\v4.0.30319

        18/03/2010  16:47           132,944 MSBuild.exe
                       1 File(s)        132,944 bytes

         Directory of C:\Windows\winsxs\x86_msbuild_b03f5f7f11d50a3a_6.0.6000.16386_none_815e96e1b0e084be

        20/10/2006  02:14            69,632 MSBuild.exe
                       1 File(s)         69,632 bytes

         Directory of C:\Windows\winsxs\x86_msbuild_b03f5f7f11d50a3a_6.0.6000.16720_none_81591d45b0e55432

        27/07/2008  19:00            69,632 MSBuild.exe
                       1 File(s)         69,632 bytes

         Directory of C:\Windows\winsxs\x86_msbuild_b03f5f7f11d50a3a_6.0.6000.20883_none_6a9133e9ca879925

        27/07/2008  18:55            69,632 MSBuild.exe
                       1 File(s)         69,632 bytes

        C:\Windows>cd Microsoft.NET\Framework\v3.5

        C:\Windows\Microsoft.NET\Framework\v3.5>msbuild /ver
        Microsoft (R) Build Engine Version 3.5.30729.1
        [Microsoft .NET Framework, Version 2.0.50727.3053]
        Copyright (C) Microsoft Corporation 2007. All rights reserved.

        3.5.30729.1

    Read the article

  • What Algorithm will Find New Longtail Keywords for *keyword* in PPC

    - by Becci
    I am looking for the algorithm (or combination) that would allow someone to find new longtail PPC search phrases based on, say, one core keyword, e.g.:

    Eg #1: word word corekeyword
    Eg #2: word corekeyword word

    Google's keyword tool allows a limited number, vertically, mostly of eg #1 (https://adwords.google.com.au/select/KeywordToolExternal). I also know of other PPC apps that allow more volume than the Google AdWords keyword tool. But I want to find other combos that mention the core keyword and then naturally sort them by highest search volume.

    Working example of exact match. For the core keyword "copywriter" (40,500 searches a month), Google will serve up "become a copywriter" (480 searches globally/month in English). But if I specifically look up "how to become a copywriter" (720 searches a month), this exact longtail phrase has 240 more searches than the three-word version spat out by Google. I want the algorithm to find any other highly searched exact longtails like "how to become a copywriter", simply because it will save significant $ finding other longtail keywords after your campaign has been running and made Google lots of money.

    I don't want a concatenation algorithm (I already have one of those), because, hypothetically, I don't know what the keywords I want to find will be. Any gurus out there?

    Becci
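    Editorial sketch, not an answer from the thread: the filter-and-rank step described above, assuming candidate phrases with volumes have already been obtained from some keyword source (the genuinely hard part, which stays external here). Keep every phrase that mentions the core keyword anywhere, in any pattern, then sort by monthly volume descending.

        // Rank candidate phrases containing a core keyword by search volume.
        #include <algorithm>
        #include <iostream>
        #include <string>
        #include <vector>

        struct Phrase { std::string text; int monthlyVolume; };

        std::vector<Phrase> RankLongtails(std::vector<Phrase> candidates,
                                          const std::string& coreKeyword)
        {
            // Drop phrases that do not mention the core keyword at all.
            candidates.erase(
                std::remove_if(candidates.begin(), candidates.end(),
                    [&](const Phrase& p) {
                        return p.text.find(coreKeyword) == std::string::npos;
                    }),
                candidates.end());

            // Highest-volume longtails first.
            std::sort(candidates.begin(), candidates.end(),
                      [](const Phrase& a, const Phrase& b) {
                          return a.monthlyVolume > b.monthlyVolume;
                      });
            return candidates;
        }

        int main()
        {
            std::vector<Phrase> c = {
                {"become a copywriter", 480},
                {"how to become a copywriter", 720},
                {"freelance plumber", 90},
            };
            for (const auto& p : RankLongtails(c, "copywriter"))
                std::cout << p.monthlyVolume << "  " << p.text << "\n";
        }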

    Read the article

  • android logging sdcard

    - by Abhi Rao
    Hello, with the Android emulator I am not able to write/create a file on the SD card (for logging). Here is what I have done so far:

    - Ran mksdcard 8192K C:\android-dev\emu_sdcard\emu_logFile
    - Created a new AVD and assigned emu_logFile to it, so that when I view the AVD details it says C:\android-dev\emu_sdcard\emu_logFile against the field "SD Card"
    - Here is the relevant code:

        public class ZLogger {
            static PrintWriter zLogWriter = null;

            private static void Initialize() {
                try {
                    File sdDir = Environment.getExternalStorageDirectory();
                    if (sdDir.canWrite()) {
                        :
                        File logFile = new File(sdDir, VERSION.RELEASE + "_" + ".log");
                        FileWriter logFileWriter = new FileWriter(logFile);
                        zLogWriter = new PrintWriter(logFileWriter);
                        zLogWriter.write("\n\n - " + date + " - \n");
                    }
                } catch (IOException e) {
                    Log.e("ZLogger", "Could not write to file: " + e.getMessage());
                }
            }

    sdDir.canWrite() returns false; please note it is not the exception path. From the adb shell, when I do ls I see sdcard as a link to /mnt/sdcard. When I do ls -l /mnt here is what I see:

        ls -l /mnt
        drwxr-xr-x root     system            2010-12-24 03:41 asec
        drwx------ root     root              2010-12-24 03:41 secure
        d--------- system   system            2010-12-24 03:41 sdcard

    whereas if I go to the directory where I created emu_sdcard, I see a lock has been issued, as shown here:

        C:\>dir android-dev\emu_sdcard
         Volume in drive C is Preload
         Volume Serial Number is A4F3-6C29

         Directory of C:\android-dev\emu_sdcard

        12/24/2010  03:41 AM    <DIR>          .
        12/24/2010  03:41 AM    <DIR>          ..
        12/24/2010  03:17 AM         8,388,608 emu_logFile
        12/24/2010  03:41 AM    <DIR>          emu_logFile.lock
                       1 File(s)      8,388,608 bytes
                       3 Dir(s)  50,347,704,320 bytes free

    I have looked at these and other SO questions:

    - Android Emulator sdcard push error: Read-only file system (2)
    - Not able to view SDCard folder in the FileExplorer of Android Eclipse

    I have added the following to AndroidManifest.xml:

        <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

    Please let me know your thoughts; what am I missing here? Why does canWrite() return false? What should I do to add permissions to the sdcard?
    Read the article

  • Automatically add links to class source files under a specified directory of another project in Visu

    - by Binary255
    I want to share some class source files between two projects in Visual Studio 2008. I can't create a project for the common parts and reference it (see my comment if you are curious as to why). I've managed to share some source files, but it could be a lot neater. I've created a test solution called Commonality; its Solution Explorer contains projects One and Two.

    What I like:

    - All class files under the Common folder of project One are automatically added to project Two by linking. It's mostly the same as if I had chosen Add / Existing Item... : Add As Link on each new class source file.
    - It's clear that these files have been linked in: the shortcut arrow symbol marks each file icon.

    What I do not like:

    - The file and folder tree structure under Common of project One isn't included. It's all flat.
    - The linked source files are shown under the project root of project Two. It would look much less cluttered if they were located under Common, like in project One.

    The file tree structure of the Commonality solution, which contains projects One and Two:

        $ tree /F /A
        Folder PATH listing for volume Cystem
        Volume serial number is 0713370 1337:F6A4
        C:.
        |   Commonality.sln
        |
        +---One
        |   |   One.cs
        |   |   One.csproj
        |   |
        |   +---bin
        |   |   \---Debug
        |   |           One.vshost.exe
        |   |           One.vshost.exe.manifest
        |   |
        |   +---Common
        |   |   |   Common.cs
        |   |   |   CommonTwo.cs
        |   |   |
        |   |   \---SubCommon
        |   |           CommonThree.cs
        |   |
        |   +---obj
        |   |   \---Debug
        |   |       +---Refactor
        |   |       \---TempPE
        |   \---Properties
        |           AssemblyInfo.cs
        |
        \---Two
            |   Two.cs
            |   Two.csproj
            |   Two.csproj.user
            |   Two.csproj~
            |
            +---bin
            |   \---Debug
            +---obj
            |   \---Debug
            |       +---Refactor
            |       \---TempPE
            \---Properties
                    AssemblyInfo.cs

    And the relevant part of project Two's project file, Two.csproj:

        <ItemGroup>
          <Compile Include="..\One\Common\**\*.cs">
          </Compile>
          <Compile Include="Two.cs" />
          <Compile Include="Properties\AssemblyInfo.cs" />
        </ItemGroup>

    How do I address what I do not like, while keeping what I like?
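
    One approach I've seen suggested (untested against this exact layout) is to add Link metadata using the well-known %(RecursiveDir) item metadata, so the linked files keep their folder structure and land under a Common folder in project Two:

        <ItemGroup>
          <!-- Link each shared file under Common in project Two,
               preserving the subfolder structure from project One. -->
          <Compile Include="..\One\Common\**\*.cs">
            <Link>Common\%(RecursiveDir)%(Filename)%(Extension)</Link>
          </Compile>
          <Compile Include="Two.cs" />
          <Compile Include="Properties\AssemblyInfo.cs" />
        </ItemGroup>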

    Read the article

  • SendMessage to window created by AllocateHWnd causes deadlock

    - by user2704265
    In my Delphi project I derive a thread class, TMyThread, and follow the advice from forums to use AllocateHWnd to create a window handle. From the TMyThread object I call SendMessage to send messages to that window handle. When the messages are sent in small volume the application works well; when they come in large volume, however, the application deadlocks and stops responding. I suspect the message queue fills up: in LogWndProc there is only code to process the message, and no code to remove messages from the queue, so perhaps all the processed messages still sit in the queue until it becomes full. Is that correct? The code is attached below:

        var
          hLogWnd: HWND = 0;

        procedure TForm1.FormCreate(Sender: TObject);
        begin
          hLogWnd := AllocateHWnd(LogWndProc);
        end;

        procedure TForm1.FormDestroy(Sender: TObject);
        begin
          if hLogWnd <> 0 then
            DeallocateHWnd(hLogWnd);
        end;

        procedure TForm1.LogWndProc(var Message: TMessage);
        var
          S: PString;
        begin
          if Message.Msg = WM_UPDATEDATA then
          begin
            S := PString(Message.LParam);
            try
              List1.Items.Add(S^);
            finally
              Dispose(S);
            end;
          end
          else
            Message.Result := DefWindowProc(hLogWnd, Message.Msg, Message.WParam, Message.LParam);
        end;

        procedure TMyThread.SendLog(I: Integer);
        var
          Log: PString;
        begin
          New(Log);
          Log^ := 'Log: current stage is ' + IntToStr(I);
          SendMessage(hLogWnd, WM_UPDATEDATA, 0, LPARAM(Log));
          Dispose(Log);
        end;
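
    For comparison, here is a PostMessage-based variant I'm considering (a sketch, assuming the same WM_UPDATEDATA message). It returns immediately instead of blocking the worker thread, and it makes LogWndProc the single owner of the string -- note that in the code above the string is disposed both in LogWndProc and in SendLog:

        procedure TMyThread.SendLog(I: Integer);
        var
          Log: PString;
        begin
          New(Log);
          Log^ := 'Log: current stage is ' + IntToStr(I);
          // PostMessage queues the message and returns at once, so the
          // worker thread never waits on the GUI thread. Ownership of Log
          // passes to LogWndProc, the only place that disposes it.
          if not PostMessage(hLogWnd, WM_UPDATEDATA, 0, LPARAM(Log)) then
            Dispose(Log); // queue full or window gone: reclaim it ourselves
        end;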

    Read the article

  • Can't read from RSOP_RegistryPolicySetting WMI class in root\RSOP namespace

    - by JCCyC
    The class is documented at http://msdn.microsoft.com/en-us/library/aa375050%28VS.85%29.aspx, and from this page it seems it's not an abstract class: http://msdn.microsoft.com/en-us/library/aa375084%28VS.85%29.aspx. But whenever I run the code below, I get an "Invalid Class" exception in ManagementObjectSearcher.Get(). So, does this class exist or not?

        ManagementScope scope;
        ConnectionOptions options = new ConnectionOptions();
        options.Username = tbUsername.Text;
        options.Password = tbPassword.Password;
        options.Authority = String.Format("ntlmdomain:{0}", tbDomain.Text);
        scope = new ManagementScope(String.Format("\\\\{0}\\root\\RSOP", tbHost.Text), options);
        scope.Connect();
        ManagementObjectSearcher searcher = new ManagementObjectSearcher(scope,
            new ObjectQuery("SELECT * FROM RSOP_RegistryPolicySetting"));
        foreach (ManagementObject queryObj in searcher.Get())
        {
            wmiResults.Text += String.Format("id={0}\n", queryObj["id"]);
            wmiResults.Text += String.Format("precedence={0}\n", queryObj["precedence"]);
            wmiResults.Text += String.Format("registryKey={0}\n", queryObj["registryKey"]);
            wmiResults.Text += String.Format("valueType={0}\n", queryObj["valueType"]);
        }

    In the first link above, the page lists as a requirement something called a "MOF": "Rsopcls.mof". Is this something I should have but don't? How do I obtain it? Is it needed on the querying machine or the queried machine, or both? I do have two copies of a similarly named file:

        C:\Windows>dir rsop*.mof /s
         Volume in drive C has no label.
         Volume Serial Number is 245C-A6EF

         Directory of C:\Windows\System32\wbem

        02/11/2006  05:22           100.388 rsop.mof
                       1 File(s)        100.388 bytes

         Directory of C:\Windows\winsxs\x86_microsoft-windows-grouppolicy-base-mof_31bf3856ad364e35_6.0.6001.18000_none_f2c4356a12313758

        19/01/2008  07:03           100.388 rsop.mof
                       1 File(s)        100.388 bytes

         Total Files Listed:
                       2 File(s)        200.776 bytes
                       0 Dir(s)  6.625.456.128 bytes free
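
    One thing I've seen hinted at (treat the exact path as an assumption to verify) is that logging-mode RSoP data lives in per-session sub-namespaces such as root\RSOP\Computer and root\RSOP\User, rather than directly in root\RSOP. A sketch of what I mean, reusing the options object from the snippet above:

        // Assumption: the instances live one level down. Connect to the
        // Computer sub-namespace instead of root\RSOP itself.
        scope = new ManagementScope(
            String.Format("\\\\{0}\\root\\RSOP\\Computer", tbHost.Text), options);
        scope.Connect();

        ManagementObjectSearcher searcher = new ManagementObjectSearcher(scope,
            new ObjectQuery("SELECT * FROM RSOP_RegistryPolicySetting"));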

    Read the article

  • source of historical stock data

    - by rmeador
    I'm trying to make a stock market simulator (perhaps eventually growing into a predictive AI), but I'm having trouble finding data to use. I'm looking for a (hopefully free) source of historical stock market data. Ideally it would be a very fine-grained (second or minute interval) data set with the price and volume of every symbol on NASDAQ and NYSE (and perhaps others, if I get adventurous). Does anyone know of a source for such info? I found this question, which indicates Yahoo offers historical data in CSV format, but in a cursory examination of the site linked I've been unable to find out how to get it. I also don't like the idea of downloading the data piecemeal in CSV files... I imagine Yahoo would get upset and shut me off after the first few thousand requests. I also discovered another question that made me think I'd hit the jackpot, but unfortunately that OpenTick site seems to have closed its doors... too bad, since I think they were exactly what I wanted. I could also use data that's just the open/close price and volume of every symbol every day, but I'd prefer all the data if I can get it. Any other suggestions?
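
    For the piecemeal CSV route, here is a sketch of what I've pieced together so far in Python 2 -- the URL format and parameter names of Yahoo's daily-history endpoint are unverified, so treat them as assumptions:

        import csv
        import urllib2  # Python 2

        def fetch_daily_history(symbol, y1, m1, d1, y2, m2, d2):
            """Fetch daily Date,Open,High,Low,Close,Volume,Adj Close rows for
            one symbol. Months are 0-based in this (assumed) URL scheme."""
            url = ("http://ichart.finance.yahoo.com/table.csv?"
                   "s=%s&a=%d&b=%d&c=%d&d=%d&e=%d&f=%d&g=d&ignore=.csv"
                   % (symbol, m1 - 1, d1, y1, m2 - 1, d2, y2))
            return list(csv.reader(urllib2.urlopen(url)))

        rows = fetch_daily_history("AAPL", 2009, 1, 1, 2010, 1, 1)
        print rows[0]  # expected header row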

    Read the article

  • SoundChannel, removeEventHandler, AS3

    - by pixelGreaser
    Is there a better way to use the sound channel in AS3? This works, but I hate it when I tap the play button twice and the sound starts doubling. Please advise.

        var mySound:Sound = new Sound();

        playButton.addEventListener(MouseEvent.CLICK, myPlayButtonHandler);

        var myChannel:SoundChannel = new SoundChannel();
        function myPlayButtonHandler(e:MouseEvent):void {
            myChannel = mySound.play();
        }

        stopButton.addEventListener(MouseEvent.CLICK, onClickStop);
        function onClickStop(e:MouseEvent):void {
            myChannel.stop();
        }

        /*-----------------------------------------------------------------*/
        // Global sound buttons; add instances of 'killswitch' and 'onswitch' to the stage.
        killswitch.addEventListener(MouseEvent.CLICK, clipKillSwitch);
        function clipKillSwitch(e:MouseEvent):void {
            var transform1:SoundTransform = new SoundTransform();
            transform1.volume = 0;
            flash.media.SoundMixer.soundTransform = transform1;
        }

        onswitch.addEventListener(MouseEvent.CLICK, clipOnSwitch);
        function clipOnSwitch(e:MouseEvent):void {
            var transform1_:SoundTransform = new SoundTransform();
            transform1_.volume = 1;
            flash.media.SoundMixer.soundTransform = transform1_;
        }
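
    The only improvement I've come up with so far is the usual guard (a sketch -- is there anything better?): stop whatever the channel is already playing before starting a new playback, so a second tap restarts the sound instead of layering it:

        function myPlayButtonHandler(e:MouseEvent):void {
            // Stop any sound already playing on this channel first;
            // otherwise each click layers another playback on top.
            if (myChannel != null) {
                myChannel.stop();
            }
            myChannel = mySound.play();
        }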

    Read the article

  • Yahoo Query Language Problem

    - by Damiano
    Hello everybody! Today I've started with Yahoo Query Language. I want to use it to retrieve stock details, so I'm talking about Yahoo Finance. I think there is a bug in this language. This is my query:

        select * from yahoo.finance.quoteslist where symbol='@^GSPC'

    I ALWAYS get 51 results! That's impossible; take a look at http://it.finance.yahoo.com/q/cp?s=^GSPC -- there are 500 results! I also tried some paging parameters:

        select * from yahoo.finance.quoteslist(50,30) where symbol='@^GSPC'   (to get from 50 to 80)
        select * from yahoo.finance.quoteslist(100) where symbol='@^GSPC'     (to get the first 100 results)
        select * from yahoo.finance.quoteslist where symbol='@^GSPC' limit 30 offset 50

    but the last stock returned is ALWAYS:

        <quote symbol="BBY">
          <Symbol>BBY</Symbol>
          <LastTradePriceOnly>41.03</LastTradePriceOnly>
          <LastTradeDate>5/7/2010</LastTradeDate>
          <LastTradeTime>4:00pm</LastTradeTime>
          <Change>-0.48</Change>
          <Open>41.35</Open>
          <DaysHigh>42.35</DaysHigh>
          <DaysLow>39.60</DaysLow>
          <Volume>14129531</Volume>
        </quote>

    Why do I have this kind of problem? Thank you so much for your support! (P.S. I've tested it in the Yahoo YQL console.)
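
    One more variant I intend to try (unverified -- my understanding is that limit/offset page over whatever the data table already fetched locally, while the numbers in parentheses are passed through to the remote source, so asking the remote call for the whole range explicitly might behave differently):

        select * from yahoo.finance.quoteslist(0,500) where symbol='@^GSPC'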

    Read the article

  • c# calling process "cannot find the file specified"

    - by laura
    I'm a C# newbie, so bear with me. I'm trying to call "pslist" from PsTools from a C# app, but I keep getting "The system cannot find the file specified". I thought I read somewhere on Google that the exe should be in C:\windows\system32, so I tried that -- still nothing. Even using the full path, C:\windows\system32\PsList.exe, does not work. I can open other things like notepad or regedit. Any ideas?

        C:\WINDOWS\system32>dir C:\WINDOWS\SYSTEM32\PsList.exe
         Volume in drive C has no label.
         Volume Serial Number is ECC0-70AA

         Directory of C:\WINDOWS\SYSTEM32

        04/27/2010  11:04 AM           231,288 PsList.exe
                       1 File(s)        231,288 bytes
                       0 Dir(s)   8,425,492,480 bytes free

        try
        {
            // Start the child process.
            Process p = new Process();
            // Redirect the output stream of the child process.
            p.StartInfo.UseShellExecute = false;
            p.StartInfo.RedirectStandardOutput = true;
            // This works:
            //p.StartInfo.FileName = @"C:\WINDOWS\regedit.EXE";
            // This doesn't:
            p.StartInfo.FileName = @"C:\WINDOWS\system32\PsList.exe";
            p.Start();
            // Do not wait for the child process to exit before
            // reading to the end of its redirected stream.
            p.WaitForExit();
            // Read the output stream first and then wait.
            s1 = p.StandardOutput.ReadToEnd();
            p.WaitForExit();
        }
        catch (Exception ex)
        {
            Console.WriteLine("Exception Occurred :{0},{1}", ex.Message, ex.StackTrace.ToString());
            Console.ReadLine();
        }
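
    One theory I want to rule out (an assumption, not a confirmed diagnosis): if this is a 32-bit build running on 64-bit Windows, WOW64 file-system redirection maps the process's view of System32 to SysWOW64, so a file that exists only in the real System32 looks missing. A sketch of working around it via the Sysnative alias, reusing the Process object p from the snippet above:

        // Sketch: pick a path that survives WOW64 redirection. "Sysnative"
        // is a virtual alias that lets a 32-bit process reach the real
        // System32. (These Environment properties require .NET 4.)
        string pslistPath =
            Environment.Is64BitOperatingSystem && !Environment.Is64BitProcess
                ? @"C:\WINDOWS\Sysnative\PsList.exe"
                : @"C:\WINDOWS\system32\PsList.exe";

        p.StartInfo.FileName = pslistPath;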

    Read the article
