Search Results

Search found 23545 results on 942 pages for 'parallel task library'.


  • SQLAuthority News – Presented at Bangalore DevCon August 4, 2012

    - by pinaldave
    Bangalore DevCon 2012 was great fun. Earlier this month I was fortunate to be invited to present at DevCon. The event was very well planned and had an excellent response. There were more than 140 attendees in the sessions at any given time. There were two tracks, running in parallel, in the Microsoft Bangalore building. The venue was fantastic and the enthusiasm of the community was remarkable. We had a total of 12 sessions during the day. I had decided to attend every session if I could. We had so many fantastic speakers and I did not want to miss any of the sessions. As the sessions ran in parallel, I attended each one for 30 minutes and then switched to the other track. I had fun doing this experiment, as it gave me a good idea about every session. I personally presented the session SQL Tips and Tricks for Web Developers. DBA is a very common word, and every time we say SQL Server, lots of people think of DBAs; however, SQL Server is used by many developers as well. In this session I tried to cover a few of the simple concepts where developers must pay special attention while writing T-SQL code. Sometimes a very small mistake can be fatal to performance down the road. Here are a few of the photos of the event. Btw, the two sessions which clearly stood out were Vinod Kumar‘s session on Leadership and Lohith‘s session on Visual Studio Tips and Tricks. Additional Read: Following are the blog posts by the community on the Bangalore DevCon experience. I encourage you to read them all and leave a comment on which one you liked the most. http://abhishekbhat.wordpress.com/2012/08/07/devcon-2012-experience/ http://praveenprajapati.wordpress.com/2012/08/07/devcon-2012-part-2/ http://tomsblogsspot.blogspot.in/2012/08/devcon-2012.html?view=classic https://manasdash.wordpress.com/2012/08/06/devcon-2012-by-bdotnet-4th-august-2012/ http://www.jagan-bhathri.com/2012/08/bangalore-developer-conference-2012-by.html Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL, Technology

    Read the article

  • Large svn external

    - by MPelletier
    I have a project which uses a large library residing in its own repository. We use TortoiseSVN; the server runs an enterprise edition of VisualSVN. The project itself has the "standard" structure: trunk, tags, branches. In each branch, tag, and trunk the library is set as an external (svn:externals property). If I get the entire tree, I get the library several times, which is getting ridiculously repetitive. Is there a recommended structure for this? Or perhaps a way not to get all externals (because the other externals are much smaller and easier to manipulate)?

    Read the article

  • Time Tracking on an Agile Team

    - by Stephen.Walther
    What’s the best way to handle time-tracking on an Agile team? Your gut reaction to this question might be to resist any type of time-tracking at all. After all, one of the principles of the Agile Manifesto is “Individuals and interactions over processes and tools”. Forcing the developers on your team to track the amount of time that they devote to completing stories or tasks might seem like useless bureaucratic red tape: an impediment to getting real work done. I completely understand this reaction. I’ve been required to use time-tracking software in the past to account for each hour of my workday. It made me feel like Fred Flintstone punching in at the quarry and not like a professional.

    Why You Really Do Need Time-Tracking

    There are, however, legitimate reasons to track time spent on stories even when you are a member of an Agile team. First, if you are working with an outside client, you might need to track the number of hours spent on different stories for the purposes of billing. There might be no way to avoid time-tracking if you want to get paid. Second, the Product Owner needs to know when the work on a story has gone over the original time estimated for the story. The Product Owner is concerned with Return On Investment. If the team has gone massively over the estimate on a story, then the Product Owner has a legitimate reason to halt work on the story and reconsider the story’s business value. Finally, you might want to track how much time your team spends on different types of stories or tasks. For example, if your team is spending 75% of its time doing testing then you might need to bring in more testers. Or, if 10% of your team’s time is expended performing a software build at the end of each iteration then it is time to consider better ways of automating the build process.

    Time-Tracking in SonicAgile

    For these reasons, we added time-tracking as a feature to SonicAgile, which is our free Agile project management tool. We were heavily influenced by Jeff Sutherland (one of the founders of Scrum) in the way that we implemented time-tracking (see his article http://scrum.jeffsutherland.com/2007/03/time-tracking-is-anti-scrum-what-do-you.html). In SonicAgile, time-tracking is disabled by default. If you want to use this feature then the project owner must enable time-tracking in Project Settings. You can choose to estimate using either days or hours. If you are estimating at the level of stories then it makes more sense to choose days. Otherwise, if you are estimating at the level of tasks then it makes more sense to use hours. After you enable time-tracking you can assign three estimates to a story:

    Original Estimate – This is the estimate that you enter when you first create a story. You don’t change this estimate.
    Time Spent – This is the amount of time that you have already devoted to the story. You update the time spent on each story during your daily standup meeting.
    Time Left – This is the amount of time remaining to complete the story. Again, you update the time left during your daily standup meeting.

    So when you first create a story, you enter an original estimate that becomes the time left. During each daily standup meeting, you update the time spent and time left for each story on the Kanban. If you had perfect predictive power, then the original estimate would always be the same as the sum of the time spent and the time left. For example, if you predict that a story will take 5 days to complete then on day 3, the story should have 3 days spent and 2 days left.
    Unfortunately, never in the history of mankind has anyone accurately predicted the exact amount of time that it takes to complete a story. For this reason, SonicAgile does not update the time spent and time left automatically. Each day, during the daily standup, your team should update the time spent and time left for each story. For example, the following table shows the history of the time estimates for a story that was originally estimated to take 3 days but, eventually, takes 5 days to complete:

    Day     Original Estimate   Time Spent   Time Left
    Day 1   3 days              0 days       3 days
    Day 2   3 days              1 day        2 days
    Day 3   3 days              2 days       2 days
    Day 4   3 days              3 days       2 days
    Day 5   3 days              4 days       0 days

    In the table above, everything goes as predicted until you reach day 3. On day 3, the team realizes that the work will require an additional two days. The situation does not improve on day 4. All of a sudden, on day 5, all of the remaining work gets done. Real work often follows this pattern. There are long periods when nothing gets done, punctuated by occasional and unpredictable bursts of progress. We designed SonicAgile to make it as easy as possible to track the time spent and time left on a story.

    Detecting when a Story Goes Over the Original Estimate

    Sometimes, stories take much longer than originally estimated because there is a surprise. For example, you discover that a new software component is incompatible with existing software components. Or, you discover that you have to go through a month-long certification process to finish a story. In those cases, the Product Owner has a legitimate reason to halt work on a story and re-evaluate the business value of the story. For example, if the Product Owner discovers that a story will require weeks to implement instead of days, then the story might not be worth the expense. SonicAgile displays a warning on both the Backlog and the Kanban when the time spent on a story goes over the original estimate. An icon of a clock is displayed.

    Time-Tracking and Tasks

    Another optional feature of SonicAgile is tasks. If you enable Tasks in Project Settings then you can break stories into one or more tasks. You can perform time-tracking at the level of a story or at the level of a task. If you don’t break a story into tasks then you can enter the time left and time spent for the story. As soon as you break a story into tasks, you can no longer enter the time left and time spent at the level of the story. Instead, the time left and time spent for a story is rolled up from its tasks. On the Kanban, you can see how the time left and time spent for each task gets rolled up into each story. The progress bar for the story is rolled up from the progress bars for each task. The original estimate is never rolled up – even when you break a story into tasks. A story’s original estimate is entered separately from the original estimates of each of the story’s tasks.

    Summary

    Not every Agile team can avoid time-tracking. You might be forced to track time to get paid, to detect when you are spending too much time on a particular story, or to track the amount of time that you are devoting to different types of tasks. We designed time-tracking in SonicAgile to require the least amount of work to track the information that you need. Time-tracking is an optional feature. If you enable time-tracking then you can track the original estimate, time left, and time spent for each story and task. You can use time-tracking with SonicAgile for free. Register at http://SonicAgile.com.
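    The rollup rule described above is simple enough to model. Here is a minimal Python sketch of it (a hypothetical model, not SonicAgile's actual code): time spent and time left sum up from the tasks, while the story's original estimate stays independent.

        # Hypothetical model of the rollup rule described above, not SonicAgile code.
        class Task(object):
            def __init__(self, original, spent, left):
                self.original = original
                self.spent = spent
                self.left = left

        class Story(object):
            def __init__(self, original, tasks):
                self.original = original      # entered separately; never rolled up
                self.tasks = tasks

            @property
            def spent(self):                  # rolled up from tasks
                return sum(t.spent for t in self.tasks)

            @property
            def left(self):                   # rolled up from tasks
                return sum(t.left for t in self.tasks)

        story = Story(original=3, tasks=[Task(2, spent=1, left=1),
                                         Task(2, spent=0, left=2)])
        print("%d %d %d" % (story.original, story.spent, story.left))  # -> 3 1 3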

    Read the article

  • SPARC T4-4 Beats 8-CPU IBM POWER7 on TPC-H @3000GB Benchmark

    - by Brian
    Oracle's SPARC T4-4 server delivered a world record TPC-H @3000GB benchmark result for systems with four processors. This result beats eight-processor results from IBM (POWER7) and HP (x86). The SPARC T4-4 server also delivered better performance per core than these eight-processor systems from IBM and HP. The comparisons below are system-to-system comparisons, highlighting Oracle's complete software and hardware solution. This database world record result used Oracle's Sun Storage 2540-M2 arrays (rotating disk) connected to a SPARC T4-4 server running Oracle Solaris 11 and Oracle Database 11g Release 2, demonstrating the power of Oracle's integrated hardware and software solution. The SPARC T4-4 server based configuration achieved a TPC-H scale factor 3000 world record for four-processor systems of 205,792 QphH@3000GB with price/performance of $4.10/QphH@3000GB.

    • The SPARC T4-4 server with four SPARC T4 processors (total of 32 cores) is 7% faster than the IBM Power 780 server with eight POWER7 processors (total of 32 cores) on the TPC-H @3000GB benchmark.
    • The SPARC T4-4 server is 36% better in price/performance compared to the IBM Power 780 server on the TPC-H @3000GB benchmark.
    • The SPARC T4-4 server is 29% faster than the IBM Power 780 for data loading.
    • The SPARC T4-4 server is up to 3.4 times faster than the IBM Power 780 server for the Refresh Function.
    • The SPARC T4-4 server with four SPARC T4 processors is 27% faster than the HP ProLiant DL980 G7 server with eight x86 processors on the TPC-H @3000GB benchmark.
    • The SPARC T4-4 server is 52% faster than the HP ProLiant DL980 G7 server for data loading.
    • The SPARC T4-4 server is up to 3.2 times faster than the HP ProLiant DL980 G7 for the Refresh Function.
    • The SPARC T4-4 server achieved a peak IO rate from the Oracle database of 17 GB/sec. This rate was independent of the storage used, as demonstrated by the TPC-H @3000GB benchmark, which used twelve Sun Storage 2540-M2 arrays (rotating disk), and the TPC-H @1000GB benchmark, which used four Sun Storage F5100 Flash Array devices (flash storage). [*]
    • The SPARC T4-4 server showed linear scaling from TPC-H @1000GB to TPC-H @3000GB. This demonstrates that the SPARC T4-4 server can handle the increasingly larger databases required of DSS systems. [*]
    • The SPARC T4-4 server benchmark results demonstrate a complete solution of building Decision Support Systems including data loading, business questions and refreshing data. Each phase usually has a time constraint and the SPARC T4-4 server shows superior performance during each phase.

    [*] The TPC believes that comparisons of results published with different scale factors are misleading and discourages such comparisons.

    Performance Landscape

    The table below lists the leading TPC-H @3000GB results for non-clustered systems.
    TPC-H @3000GB, Non-Clustered Systems

    System                  Processor                  P/C/T - Memory        Composite (QphH)  $/perf ($/QphH)  Power (QppH)  Throughput (QthH)  Database         Available
    SPARC Enterprise M9000  3.0 GHz SPARC64 VII+       64/256/256 - 1024 GB  386,478.3         $18.19           316,835.8     471,428.6          Oracle 11g R2    09/22/11
    SPARC T4-4              3.0 GHz SPARC T4           4/32/256 - 1024 GB    205,792.0         $4.10            190,325.1     222,515.9          Oracle 11g R2    05/31/12
    SPARC Enterprise M9000  2.88 GHz SPARC64 VII       32/128/256 - 512 GB   198,907.5         $15.27           182,350.7     216,967.7          Oracle 11g R2    12/09/10
    IBM Power 780           4.1 GHz POWER7             8/32/128 - 1024 GB    192,001.1         $6.37            210,368.4     175,237.4          Sybase 15.4      11/30/11
    HP ProLiant DL980 G7    2.27 GHz Intel Xeon X7560  8/64/128 - 512 GB     162,601.7         $2.68            185,297.7     142,685.6          SQL Server 2008  10/13/10

    P/C/T = Processors, Cores, Threads
    QphH = the Composite Metric (bigger is better)
    $/QphH = the Price/Performance metric in USD (smaller is better)
    QppH = the Power Numerical Quantity
    QthH = the Throughput Numerical Quantity

    The following table lists data load times and refresh function times during the power run.

    TPC-H @3000GB, Non-Clustered Systems - Database Load & Database Refresh

    System                  Processor                  Data Loading (h:m:s)  T4 Adv.  RF1 (sec)  T4 Adv.  RF2 (sec)  T4 Adv.
    SPARC T4-4              3.0 GHz SPARC T4           04:08:29              1.0x     67.1       1.0x     39.5       1.0x
    IBM Power 780           4.1 GHz POWER7             05:51:50              1.5x     147.3      2.2x     133.2      3.4x
    HP ProLiant DL980 G7    2.27 GHz Intel Xeon X7560  08:35:17              2.1x     173.0      2.6x     126.3      3.2x

    Data Loading = database load time
    RF1 = power test first refresh transaction
    RF2 = power test second refresh transaction
    T4 Adv. = the ratio of the system's time to the SPARC T4-4 time

    Complete benchmark results can be found at the TPC benchmark website: http://www.tpc.org.

    Configuration Summary and Results

    Hardware Configuration:
    SPARC T4-4 server
    4 x SPARC T4 3.0 GHz processors (total of 32 cores, 128 threads)
    1024 GB memory
    8 x internal SAS (8 x 300 GB) disk drives

    External Storage:
    12 x Sun Storage 2540-M2 array storage, each with 12 x 15K RPM 300 GB drives, 2 controllers, 2 GB cache

    Software Configuration:
    Oracle Solaris 11 11/11
    Oracle Database 11g Release 2 Enterprise Edition

    Audited Results:
    Database Size: 3000 GB (Scale Factor 3000)
    TPC-H Composite: 205,792.0 QphH@3000GB
    Price/performance: $4.10/QphH@3000GB
    Available: 05/31/2012
    Total 3 year Cost: $843,656
    TPC-H Power: 190,325.1
    TPC-H Throughput: 222,515.9
    Database Load Time: 4:08:29

    Benchmark Description

    The TPC-H benchmark is a performance benchmark established by the Transaction Processing Performance Council (TPC) to demonstrate Data Warehousing/Decision Support Systems (DSS). TPC-H measurements are produced for customers to evaluate the performance of various DSS systems. These queries and updates are executed against a standard database under controlled conditions. Performance projections and comparisons between different TPC-H database sizes (100GB, 300GB, 1000GB, 3000GB, 10000GB, 30000GB and 100000GB) are not allowed by the TPC. TPC-H is a data warehousing-oriented, non-industry-specific benchmark that consists of a large number of complex queries typical of decision support applications. It also includes some insert and delete activity that is intended to simulate loading and purging data from a warehouse. TPC-H measures the combined performance of a particular database manager on a specific computer system. The main performance metric reported by TPC-H is called the TPC-H Composite Query-per-Hour Performance Metric (QphH@SF, where SF is the number of GB of raw data, referred to as the scale factor).
    QphH@SF is intended to summarize the ability of the system to process queries in both single and multiple user modes. The benchmark requires reporting of price/performance, which is the ratio of the total HW/SW cost plus 3 years of maintenance to the QphH. A secondary metric is storage efficiency, which is the ratio of total configured disk space in GB to the scale factor.

    Key Points and Best Practices

    Twelve Sun Storage 2540-M2 arrays were used for the benchmark. Each Sun Storage 2540-M2 array contains 12 15K RPM drives and is connected to a single dual-port 8Gb FC HBA using 2 ports. Each Sun Storage 2540-M2 array showed 1.5 GB/sec for sequential read operations and showed linear scaling, achieving 18 GB/sec with twelve Sun Storage 2540-M2 arrays. These were standalone IO tests. The peak IO rate measured from the Oracle database was 17 GB/sec. Oracle Solaris 11 11/11 required very little system tuning. Some vendors try to make the point that storage ratios are of customer concern. However, storage ratio size has more to do with disk layout and the increasing capacities of disks, so this is not an important metric by which to compare systems. The SPARC T4-4 server and Oracle Solaris efficiently managed the system load of over one thousand Oracle Database parallel processes. Six Sun Storage 2540-M2 arrays were mirrored to another six Sun Storage 2540-M2 arrays, on which all of the Oracle database files were placed. IO performance was high and balanced across all the arrays. The TPC-H Refresh Function (RF) simulates the periodic refresh portion of a data warehouse by adding new sales data and deleting old sales data. Parallel DML (parallel insert and delete in this case) and database log performance are key for this function, and the SPARC T4-4 server outperformed both the IBM POWER7 server and the HP ProLiant DL980 G7 server. (See the RF columns above.)

    See Also
    Transaction Processing Performance Council (TPC) Home Page
    Ideas International Benchmark Page
    SPARC T4-4 Server oracle.com OTN
    Oracle Solaris oracle.com OTN
    Oracle Database 11g Release 2 Enterprise Edition oracle.com OTN
    Sun Storage 2540-M2 Array oracle.com OTN

    Disclosure Statement

    TPC-H, QphH, $/QphH are trademarks of Transaction Processing Performance Council (TPC). For more information, see www.tpc.org. SPARC T4-4 205,792.0 QphH@3000GB, $4.10/QphH@3000GB, available 5/31/12, 4 processors, 32 cores, 256 threads; IBM Power 780 192,001.1 QphH@3000GB, $6.37/QphH@3000GB, available 11/30/11, 8 processors, 32 cores, 128 threads; HP ProLiant DL980 G7 162,601.7 QphH@3000GB, $2.68/QphH@3000GB, available 10/13/10, 8 processors, 64 cores, 128 threads.
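    As a quick sanity check on the audited numbers above, price/performance is just the total 3-year cost divided by the composite QphH; a minimal sketch using the figures from the results summary:

        # Price/performance from the audited results above.
        total_cost = 843656.0          # total 3-year cost in USD
        qphh = 205792.0                # TPC-H Composite, QphH@3000GB
        print("$%.2f/QphH" % (total_cost / qphh))   # -> $4.10/QphH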

    Read the article

  • Python urllib.urlopen IOError

    - by Michael
    So I have the following lines of code in a function:

        sock = urllib.urlopen(url)
        html = sock.read()
        sock.close()

    and they work fine when I call the function by hand. However, when I call the function in a loop (using the same urls as earlier) I get the following error:

        Traceback (most recent call last):
          File "./headlines.py", line 256, in <module>
            main(argv[1:])
          File "./headlines.py", line 37, in main
            write_articles(headline, output_folder + "articles_" + term + "/")
          File "./headlines.py", line 232, in write_articles
            print get_blogs(headline, 5)
          File "/Users/michaelnussbaum08/Documents/College/Sophmore_Year/Quarter_2/Innovation/Headlines/_code/get_content.py", line 41, in get_blogs
            sock = urllib.urlopen(url)
          File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib.py", line 87, in urlopen
            return opener.open(url)
          File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib.py", line 203, in open
            return getattr(self, name)(url)
          File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib.py", line 314, in open_http
            if not host: raise IOError, ('http error', 'no host given')
        IOError: [Errno http error] no host given

    Any ideas?
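    As the last frame of the traceback shows, urllib raises "no host given" exactly when the parsed URL has an empty host, so a defensive check before urlopen usually pinpoints the bad value quickly. A minimal Python 2 sketch (the helper name is hypothetical):

        # Hypothetical guard: reject URLs whose host parses as empty,
        # which is exactly the condition behind "no host given".
        import urllib
        from urlparse import urlparse

        def checked_urlopen(url):
            url = url.strip()  # stray whitespace/newlines are a common culprit in loops
            parsed = urlparse(url)
            if parsed.scheme not in ('http', 'https') or not parsed.netloc:
                raise ValueError('malformed url: %r' % url)
            return urllib.urlopen(url)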

    Read the article

  • Unable to load libsctp.so for non-root user

    - by sankoz
    I have a Linux application that uses the libsctp.so library. When I run it as root, it runs fine. But when I run it as an ordinary user, it gives the following error:

        error while loading shared libraries: libsctp.so.1: cannot open shared object file: No such file or directory

    But when I run ldd as an ordinary user, it is able to see the library:

        [sanjeev@devtest6 src]$ ldd myapp
        ...
        libsctp.so.1 => /usr/local/lib/libsctp.so.1 (0x00d17000)
        [sanjeev@devtest6 src]$ ls -lL /usr/local/lib/libsctp.so.1
        -rwxrwxrwx 1 root root 27430 2009-06-29 11:26 /usr/local/lib/libsctp.so.1
        [sanjeev@devtest6 src]$

    What could be wrong? How is ldd able to find libsctp.so, but when actually running the app, it is not able to find the same library?

    Read the article

  • Plan Operator Tuesday round-up

    - by Rob Farley
    Eighteen posts for T-SQL Tuesday #43 this month, discussing Plan Operators. I put them together and made the following clickable plan. It’s 1000px wide, so I hope you have a monitor wide enough. Let me explain this plan for you (people’s names are the links to the articles on their blogs – the same links as in the plan above). It was clearly a SELECT statement. Wayne Sheffield (@dbawayne) wrote about that, so we start with a SELECT physical operator, leveraging the logical operator Wayne Sheffield. The SELECT operator calls the Paul White operator, discussed by Jason Brimhall (@sqlrnnr) in his post. The Paul White operator is quite remarkable, and can consume three streams of data. Let’s look at those streams. The first pulls data from a Table Scan – Boris Hristov (@borishristov)’s post – using parallel threads (Bradley Ball – @sqlballs) that pull the data eagerly through a Table Spool (Oliver Asmus – @oliverasmus). A scalar operation is also performed on it, thanks to Jeffrey Verheul (@devjef)’s Compute Scalar operator. The second stream of data applies Evil (I figured that must mean a procedural TVF, but could’ve been anything), courtesy of Jason Strate (@stratesql). It performs this Evil on the merging of parallel streams (Steve Jones – @way0utwest), which suck data out of a Switch (Paul White – @sql_kiwi). This Switch operator is consuming data from up to four lookups, thanks to Kalen Delaney (@sqlqueen), Rick Krueger (@dataogre), Mickey Stuewe (@sqlmickey) and Kathi Kellenberger (@auntkathi). Unfortunately Kathi’s name is a bit long and has been truncated, just like in real plans. The last stream performs a join of two others via a Nested Loop (Matan Yungman – @matanyungman). One pulls data from a Spool (my post – @rob_farley) populated from a Table Scan (Jon Morisi). The other applies a catchall operator (the catchall is because Tamera Clark (@tameraclark) didn’t specify any particular operator, and a catchall is what gets shown when SSMS doesn’t know what to show. Surprisingly, it’s showing the yellow one, which is about cursors. Hopefully that’s not what Tamera planned, but anyway...) to the output from an Index Seek operator (Sebastian Meine – @sqlity). Lastly, I think everyone put in 110% effort, so that’s what all the operators cost. That didn’t leave anything for me, unfortunately, but that’s okay. Also, because he decided to use the Paul White operator, Jason Brimhall gets 0%, and his 110% was given to Paul’s Switch operator post. I hope you’ve enjoyed this T-SQL Tuesday, and have learned something extra about Plan Operators. Keep your eye out for next month’s one by watching the Twitter Hashtag #tsql2sday, and why not contribute a post to the party? Big thanks to Adam Machanic as usual for starting all this. @rob_farley

    Read the article

  • Innovation and the Role of Social Media

    - by Brian Dirking
    A very interesting post by Andy Mulholland of Capgemini this week – “The CIO is trapped between the CEO wanting innovation and the CFO needing compliance” – had many interesting points: “A successful move in one area won’t be recognized and rapidly implemented in other areas to multiply the benefits, or worse unsuccessful ideas will get repeated adding to the cost and time wasted. That’s where the need to really address the combination of social networking, collaboration, knowledge management and business information is required.” Without communicating what works and what doesn’t, the innovations of our organization may be lost, and the failures repeated. That makes sense. If you liked Andy Mulholland’s blog post, you need to hear Howard Beader’s presentation at the Enterprise 2.0 Conference on innovation and the role of social media. (Howard will be speaking in the Market Leaders Session at 1 PM on Wednesday, June 22nd.) Some of the thoughts Howard will share include:

    • Innovation is more than just ideas; it’s getting ideas to market, and removing the obstacles that stand in the way
    • Innovation is about parallel processing; you can’t remove the obstacles one by one because you will get to market too late
    • Innovation can be about product innovation, but it can also be about process innovation

    This brings us to the second issue Andy raises: “...the need for integration with, and visibility of, processes to understand exactly how the enterprise functions and delivers on its policies...” Andy goes on to talk about this from the perspective of compliance and the CFO’s concerns. And it’s true: innovation can come both in product innovation, but also internal process innovation. And process innovation can have as much impact as product innovation. New supply chain models can disrupt an industry overnight. Many people ignore process innovation as a benefit of social business, because it is perceived as a bottom-line rather than top-line impact. But it can actually impact your top line by changing your entire business model. Oracle WebCenter sits at this crossroads between product innovation and process innovation, enabling you to drive go-to-market innovations through internal social media tools, removing obstacles in parallel, and also providing you deep insight into your processes so you can identify bottlenecks and realize whole new ways of doing business. Learn more about how at the Enterprise 2.0 Conference, where Oracle will be in booth #213 showing Oracle WebCenter and Oracle Fusion Applications.

    Read the article

  • Slow Chat with Industry Experts: Developing Multithreaded Applications

    Sponsored by Intel. Join the experts who created The Intel Guide for Developing Multithreaded Applications for a slow chat about multithreaded application development. Bring your questions about application threading, memory management, synchronization, programming tools and more, and get answers from the parallel programming experts. Post your questions here.

    Read the article

  • How to resolve this VC++ 6.0 linker error?

    - by fishdump
    This is a Windows console application (actually a service) that a previous guy built 4 years ago; it is installed and running. I now need to make some changes but can't even build the current version! Here is the build output:

        --------------------Configuration: MyApp - Win32 Debug--------------------
        Compiling resources...
        Compiling...
        Main.cpp
        winsock.cpp
        Linking...
        LINK : warning LNK4098: defaultlib "LIBCMTD" conflicts with use of other libs; use /NODEFAULTLIB:library
        Main.obj : error LNK2001: unresolved external symbol _socket_dontblock
        Debug/MyApp.exe : fatal error LNK1120: 1 unresolved externals
        Error executing link.exe.

        MyApp.exe - 2 error(s), 1 warning(s)

    If I use /NODEFAULTLIB then I get loads of errors. The code does not actually use _socket_dontblock and I can't find anything on it on the 'net. Presumably it is used by some library I am linking to, but I don't know which library it is in. --- Alistair.

    Read the article

  • SSAS Native v .NET Provider

    - by ACALVETT
    Recently I was investigating why a new server, which is in its parallel-running phase, was taking significantly longer to process the daily data than the server it is due to replace. The server has SQL & SSAS installed, so the problem was not likely to be in the network transfer as it's using shared memory. As I dug around the SQL DMVs I noticed in sys.dm_exec_connections that the SSAS connection had a packet size of 8000 bytes instead of the usual 4096 bytes, and from there I found that the datasource...(read more)

    Read the article

  • help installing odfWeave

    - by Andreas
    This is not a programming question per se, but I hope someone can help: I can't install odfWeave. It looks like the problem is with the package XML, which I can't install either:

        checking for xml2-config... no
        Cannot find xml2-config
        ERROR: configuration failed for package ‘XML’
        * removing ‘/home/andreas/R/i486-pc-linux-gnu-library/2.10/XML’
        * installing source package ‘odfWeave’ ...
        ** R
        ** inst
        ** preparing package for lazy loading
        Warning in library(pkg, character.only = TRUE, logical.return = TRUE, lib.loc = lib.loc) :
          there is no package called 'XML'
        Error : package 'XML' could not be loaded
        ERROR: lazy loading failed for package ‘odfWeave’
        * removing ‘/home/andreas/R/i486-pc-linux-gnu-library/2.10/odfWeave’

    Any help much appreciated. System: Ubuntu 9.04.

    Read the article

  • Multiple delivery confirmations per message with Ruby SMPP

    - by Santiago Palladino
    I am using the Ruby smpp library to send/receive SMS. Right now we are sending messages to two different servers using the ruby-smpp library. One of them works perfectly, but the other one sends multiple DELIVRD confirmations for each message. And by multiple I mean hundreds of confirmations per message in some cases. Does anyone know any possible reason behind this? I suspect something related to that company's implementation of the protocol, since everything works perfectly with the other server, rather than a bug in the specific smpp Ruby library. We are using SMPP v3.4.

    Read the article

  • How to link a SQLite Extension Source File into Xcode for iPhone?

    - by crunchyt
    I use a statically linked library for SQLite in an iPhone Xcode project. I am now trying to include a .c extension to SQLite in this project. However, I am having trouble making the SQLite in the build see the extension. The statically linked SQLite library works fine. The .c extension also works on my desktop, and builds fine as a statically linked library in Xcode. However, the custom functions it defines are missing when called. For example, I load the extension like so with no errors:

        SELECT load_extension('extension_name.so');

    But when I try to call a function defined in the extension, I get this message:

        DB Error: 1 "no such function: custom_function"

    Does anyone know much about linking an SQLite extension into an Xcode project?
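    One thing worth noting: load_extension() only loads dynamically loadable modules, so when everything is statically linked (as on the iPhone) the usual approach is to call the extension's init entry point directly, or register it with sqlite3_auto_extension(), before opening connections. For comparison, the dynamic path can be verified on a desktop build from Python's sqlite3 module, assuming that build was compiled with extension loading enabled (the extension and function names below come from the question):

        # Desktop-side check of the dynamic loading path (not applicable on iOS).
        import sqlite3

        con = sqlite3.connect(':memory:')
        con.enable_load_extension(True)          # missing if compiled out
        con.load_extension('./extension_name')   # the .so from the question
        print(con.execute("SELECT custom_function('x')").fetchone())
        con.enable_load_extension(False)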

    Read the article

  • Open source license for test code

    - by Gary
    I'm creating a project to house an iPhone library of common code... essentially it's a library that'll save people from hunting down solutions to common problems that amount to copying and pasting snippets of code. The site is located here: http://code.google.com/p/devkit-bb/ I licensed it under the Eclipse Public License, because it fosters extension of the library without imposing constraints like the LGPL's requirement that object files be provided/made available, which would apply here since everything is statically linked. What I'm wondering is how/what license to apply to the unit tests, since they essentially demonstrate how to use the various interfaces and components. They're designed for potential copy-and-paste situations, and I don't want people who might end up using this as part of the building blocks of their environment to feel like the license would prohibit that "derivative work", i.e. their application or game.

    Read the article

  • Python C extension, problems with dlopen on Mac OS

    - by Jason Sundram
    I've taken a library that is distributed as a binary lib (.a) and header, written some C++ code against it, and want to wrap the results up in a Python module. I've done this here. The problem is that when importing this module on Mac OS X (I've tried 10.5 and 10.6), I get the following error:

        dlopen(/Library/Python/2.5/site-packages/dirac.so, 2): Symbol not found: _DisposePtr
          Referenced from: /Library/Python/2.5/site-packages/dirac.so
          Expected in: dynamic lookup

    It looks like symbols defined in the Carbon framework aren't being properly resolved, but I'm not sure what to do about that. I am supplying -framework Carbon to distutils.core.Extension's extra_link_args parameter, so I'm not sure what else I should do. Any help would be much appreciated.
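    For reference, here is a minimal setup.py sketch of the configuration described (the source and library file names are hypothetical; only the module name comes from the error message). If the framework flags are passed like this and _DisposePtr is still unresolved at import time, it is worth reading the actual link command that distutils prints to confirm the arguments survive:

        # Hypothetical setup.py matching the setup described in the question.
        from distutils.core import setup, Extension

        dirac = Extension(
            'dirac',                                   # module name, from the error
            sources=['dirac_wrap.cpp'],                # hypothetical wrapper source
            extra_objects=['libvendor.a'],             # the vendor's binary .a library
            extra_link_args=['-framework', 'Carbon'],  # as described in the question
        )

        setup(name='dirac', version='0.1', ext_modules=[dirac])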

    Read the article

  • Is there a J2SE sensor API?

    - by Martin Strandbygaard
    Does anyone know of a "standardized" Java API for working with sensors that isn't closely tied to J2ME, as is the case with JSR 256? I'm writing a Java library for interfacing with a sensor network consisting of several different types of sensors (mostly simple stuff such as temperature, humidity, GPS, etc.). So far I've rolled my own interface, and users have to write apps against this. I would like to change this approach and implement a "standard" API so that implementations aren't so closely tied to my library. I've looked at JSR 256, but that really isn't a great solution as it's for J2ME, and my library is mostly used by Android devices or laptops running the full J2SE.

    Read the article

  • Exposing .NET assembly as COM 101

    - by Jan Zich
    I'm having trouble exposing a .NET assembly via COM. It seems that I must be missing some basic step, because I think I followed all the tutorials and documentation I found, as well as common sense, but still when I do (in a test VBScript):

        Set o = CreateObject("MyLib.MyClass")

    it keeps saying that the object cannot be created. Here are the steps I have done:

    • I have a simple one-method dummy class with no attributes.
    • The class is in a class library which has "Make assembly COM-visible" ticked in Visual Studio.
    • The class library is signed.
    • The DLL is registered via RegAsm.exe with the /codebase parameter (I don't want to / cannot add the DLL to the GAC).
    • Just to be sure, I tried to copy the library to the same directory as the test VBScript, but it does not help.

    Edit: I should have mentioned that I can instantiate the class via COM if I put the DLL into the GAC.
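    For what it's worth, the same activation test can be run from Python with the pywin32 package (assumption: pywin32 is installed); a failure here raises the underlying COM error, which sometimes carries a more descriptive HRESULT than VBScript's generic message:

        # Equivalent smoke test of the COM registration via pywin32.
        import win32com.client

        obj = win32com.client.Dispatch("MyLib.MyClass")  # ProgID from the VBScript test
        print(obj)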

    Read the article

  • Code for Parallelism Features Tour

    Last year I linked to a screencast that shows off many VS2010 features delivered by the Parallel Computing team. There have been requests for the code used to demonstrate the features. As with all my screencasts, you can see all the code in action, so you could simply type it in. To save you doing that, though, you may download the two files with the demo code here: MM.cs and Program.cs. HTH. Comments about this post welcome at the original blog.

    Read the article

  • Debugging Objective-C JNI code

    - by thatidiotguy
    Here is the situation: I have a client's Java project open in Eclipse. It uses a JNI library created by an Xcode Objective-C project. Is there any good way for me to debug the C code from Eclipse when I execute the Java code? Obviously Eclipse's default debugger cannot step into the JNI library file, and we lose the thread (thread meaning investigative thread here, not programming thread). Any advice or input is appreciated, as the code base is large enough that following the client's code will be radically faster than other options. Thanks. EDIT: It should be noted that the reason the JNI library is written in Objective-C is that it integrates with Mac OS X. It is using the Cocoa framework to integrate with the Apple speech API.

    Read the article

  • ActiveRecord-PostgreSQL Adapter Error

    - by Tian
    When running "rake db:migrate", i receive the following error: Please install the postgresql adapter: gem install activerecord-postgresql-adapter (dlopen(/Library/Ruby/Gems/1.8/gems/pg-0.9.0/lib/pg_ext.bundle, 9): no suitable image found. Did find: /Library/Ruby/Gems/1.8/gems/pg-0.9.0/lib/pg_ext.bundle: mach-o, but wrong architecture - /Library/Ruby/Gems/1.8/gems/pg-0.9.0/lib/pg_ext.bundle) Postgres 8.4.4 installed using the pre-built image file. Then ran sudo gem install pg to install pg-0.9.0 Config/database.yml: development: adapter: postgresql Does anyone know what the problem is?

    Read the article

  • Trying to compile a VS2008 project on 64-bit Windows which is a custom PowerShell PSSnapin

    - by Boris Kleynbok
    The library project compiles fine for Any CPU in VS2008 running on Windows 7 64-bit. Now, in the post-build step, the following command fails when attempting to register the library DLL:

        PS C:\Windows\Microsoft.NET\Framework64\v2.0.50727> .\installutil C:\path\Project.dll

        Exception occurred while initializing the installation:
        System.BadImageFormatException: Could not load file or assembly 'file:///C:\path\Project.dll' or one of its dependencies. An attempt was made to load a program with an incorrect format.

    Do I need to compile the project as x64? I was under the impression that Any CPU would take care of it. Also, my library does have dependencies. Do they also need to be compiled as x64? Any help is appreciated.

    Read the article

  • Getting libjpeg/libTIFF.dylib error after recompiling PHP on OS X 10.6.3 Server

    - by leggo-my-eggo
    I've recompiled PHP 5.3.0 under OS X Server (10.6.3) in order to gain FreeType support in GD. As part of the process, I recompiled libjpeg-8a (and before that I tried libjpeg-7). But when I run apachectl configtest, I get the following error:

        httpd: Syntax error on line 155 of /private/etc/apache2/httpd.conf: Cannot load /usr/libexec/apache2/mod_auth_user_host_apple.so into server: dlopen(/usr/libexec/apache2/mod_auth_user_host_apple.so, 10): Symbol not found: __cg_jpeg_resync_to_restart
          Referenced from: /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/Versions/A/Resources/libTIFF.dylib
          Expected in: /usr/lib/libJPEG.dylib
          in /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/Versions/A/Resources/libTIFF.dylib

    I don't really understand the error. Is there anyone who could help me figure out how to fix it? In PHP, the GD library now seems unaware of libjpeg support.

    Read the article

  • C++ namespace alias and forward declaration

    - by Dave
    I am using a C++ third-party library that places all of its classes in a versioned namespace; let's call it tplib_v44. They also define a generic namespace alias:

        namespace tplib = tplib_v44;

    If I forward-declare a member of the library in my own .h file using the generic namespace...

        namespace tplib { class SomeClassInTpLib; }

    ... I get compiler errors on the header in the third-party library (which is included later in my .cpp implementation file):

        error C2386: 'tplib' : a symbol with this name already exists in the current scope

    If I use the version-specific namespace, then everything works fine, but then... what's the point? What's the best way to deal with this?

    Read the article
