Search Results

Search found 9366 results on 375 pages for 'common lisp'.


  • how to make a continuous machine gun sound-effect

    - by Jan
    I am trying to make an entity fire one or more machine guns. For each gun I store the time between shots (1.0 / firing rate) and the time since the last shot. I've also loaded ~10 different gun-shot sound effects. Now, for each gun I do the following:

        function update(deltatime):
            timeSinceLastShot += deltatime
            if timeSinceLastShot >= timeBetweenShots + verySmallRandomValue():
                timeSinceLastShot -= timeBetweenShots
                if gunIsFiring:
                    displayMuzzleFlash()
                    spawnBullet()
                    selectRandomSound().play()

    But now I often get a crackling noise (which I assume happens when two or more guns fire at the same time and confuse the sound device). My question is whether A) this is a common problem with a well-known solution, maybe to do with channels or something, or B) I am using a completely wrong approach to the task. I had a look at some sound assets for other games and they used complete bursts with multiple shots. I suppose I could try that, but I would like to have organic little hiccups in the gunfire (that's what the random value is for) to make the game more gritty and dirty. I am using Panda3D, but I had the exact same problem in PyGame and SDL.

    [edit] Thanks a lot for the answers so far! One more problem with faking it, though: how do I stop the sound? Let's say I have an effect with 5 bangs... *bang* *bang* *bang* *bang* *bang* ...and I magically manage to loop it so that there's no gap or overlap if the player fires more than 5 shots. Now, what do I do if the player stops firing halfway through the third bang? How do I know how long to keep playing the sample so that the third bang is completed and I can start playing the rumbling echo of the last shot? Of course I can look up the shot/pause timing of that sound sample and code accordingly, but it feels extremely hacky.
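
    A common mitigation for the crackling is to cap how many shot samples can play at once and steal the oldest voice when the cap is reached. Here is a minimal Python sketch of the idea - the sound-handle API (play, stop, is_playing) is a placeholder, not Panda3D's or PyGame's actual interface:

        import random

        MAX_VOICES = 4  # cap on simultaneous shot samples, to avoid clipping

        class GunAudio:
            def __init__(self, sounds):
                self.sounds = sounds   # ~10 preloaded shot samples
                self.active = []       # handles of currently playing shots

            def fire(self):
                # forget handles that have finished playing
                self.active = [h for h in self.active if h.is_playing()]
                if len(self.active) >= MAX_VOICES:
                    self.active.pop(0).stop()   # steal the oldest voice
                handle = random.choice(self.sounds).play()
                self.active.append(handle)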

    Read the article

  • Simple dependency tree diagram generator

    - by foampile
    I need to produce a simple dependency tree diagram. The input data would be in the following simple format:

        ITEM_NAME   DEPENDENCY
        ----------------------
        ITEM_101    ITEM_75
        ITEM_102    ITEM_77
        ITEM_102    ITEM_61
        ITEM_102    ITEM_11

    This means that ITEM_101 depends on ITEM_75, and ITEM_102 depends on ITEM_77, ITEM_61 and ITEM_11. So the diagram would have ITEM_77, ITEM_61 and ITEM_11 on one vertical level, with ITEM_102 below them and a line connecting each of the three dependencies to ITEM_102. Likewise for ITEM_101: ITEM_75 would be somewhere above it with a line connecting the two. In the real world this tree represents a hierarchy of scheduling jobs. We have a very extensive workload automation hierarchy in Autosys, and I have heard that its front-end utility has something like this visual tree representation; however, for some reason, that utility has been disabled by the admins. My business users want to see this hierarchy in an easy-to-consume format. I was hoping I wouldn't have to program something like this from scratch, because it seems like quite a common reporting requirement and the input data is simply formatted. My question is: is there a FOSS tool that takes standardized data input and produces such a hierarchical tree? Thanks
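
    Graphviz is the usual FOSS answer for exactly this shape of input. As a hedged illustration (the file name and the header-skipping logic are assumptions about the format above), a few lines of Python can convert the two-column list into DOT and let the dot layout engine place the levels:

        import sys

        # Usage sketch: python deps2dot.py < deps.txt | dot -Tpng -o tree.png
        print("digraph deps {")
        for line in sys.stdin:
            parts = line.split()
            if len(parts) != 2 or parts[0] == "ITEM_NAME":
                continue  # skip the header and separator rows
            item, dep = parts
            # default top-to-bottom layout draws each dependency above its dependent
            print('  "{}" -> "{}";'.format(dep, item))
        print("}")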

    Read the article

  • Displaying a Grid of Data in ASP.NET MVC

    One of the most common tasks we face as web developers is displaying data in a grid. In its simplest incarnation, a grid merely displays information about a set of records - the orders placed by a particular customer, perhaps; however, most grids offer features like sorting, paging, and filtering to present the data in a more useful and readable manner. In ASP.NET WebForms the GridView control offers a quick and easy way to display a set of records in a grid, and adds features like sorting, paging, editing, and deleting with just a little extra work. On page load, the GridView automatically renders as an HTML <table> element, freeing you from having to write any markup and letting you focus instead on retrieving and binding the data to display to the GridView. In an ASP.NET MVC application, however, developers are on the hook for generating the markup rendered by each view. This task can be a bit daunting for developers new to ASP.NET MVC, especially those who have a background in WebForms. This is the first in a series of articles that explore how to display grids in an ASP.NET MVC application. This installment starts with a walkthrough of creating the ASP.NET MVC application and data access code used throughout the series. Next, it shows how to display a set of records in a simple grid. Future installments examine how to create richer grids that include sorting, paging, filtering, and client-side enhancements. We'll also look at pre-built grid solutions, like the Grid component in the MvcContrib project and JavaScript-based grids like jqGrid. But first things first - let's create an ASP.NET MVC application and see how to display database records in a web page. Read on to learn more! Read More >

    Read the article

  • What's the best way to manage error logging for exceptions?

    - by Peter Boughton
    Introduction: If an error occurs on a website or system, it is of course useful to log it and show the user a polite message with a reference code for the error. And if you have lots of systems, you don't want this information dotted around - it is good to have a single centralised place for it. At the simplest level, all that's needed is an incrementing id and a serialized dump of the error details (and possibly the "centralised place" being an email inbox). At the other end of the spectrum is perhaps a fully normalised database that also lets you press a button and see a graph of errors per day, identify the most common type of error on system X, check whether server A has more database connection errors than server B, and so on. What I'm referring to here is logging code-level errors/exceptions by a remote system - not "human-based" issue tracking, such as done with Jira, Trac, etc.

    Questions: I'm looking for thoughts from developers who have used this type of system, specifically with regards to:

        What are essential features you couldn't do without?
        What are good-to-have features that really save you time?
        What features might seem a good idea, but aren't actually that useful?

    For example, I'd say a "show duplicates" function that identifies multiple occurrences of an error (without worrying about 'unimportant' details that might differ) is pretty essential. A button to "create an issue in [Jira/etc] for this error" sounds like a good time-saver. Just to reiterate, what I'm after is practical experiences from people that have used such systems, preferably backed up with why a feature is awesome/terrible. (If you're going to theorise anyway, at the very least mark your answer as such.)
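
    As an illustration of what "show duplicates" has to do under the hood, here is a minimal, hedged Python sketch (assuming structured error records with a type and a stack trace; the masking rules are placeholders): group errors on the stable parts of the trace and ignore details that vary between otherwise-identical occurrences.

        import hashlib
        import re

        def fingerprint(error_type: str, stack_trace: str) -> str:
            # Mask memory addresses and long numeric ids, which vary
            # between otherwise-identical errors.
            masked = re.sub(r"0x[0-9a-fA-F]+|\d{4,}", "<n>", stack_trace)
            # The top few frames usually identify the fault site.
            top = "\n".join(masked.splitlines()[:5])
            return hashlib.sha1(f"{error_type}\n{top}".encode()).hexdigest()

        # Two occurrences differing only in an address collapse to one group:
        a = fingerprint("DbError", "connect() at pool.c:0x7f3a\nquery() at db.c")
        b = fingerprint("DbError", "connect() at pool.c:0x91ee\nquery() at db.c")
        assert a == b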

    Read the article

  • Network: Incoming connections work, outgoing fails

    - by anirvan
    I recently set up my own server at home running Ubuntu 12.04 server edition. On booting up, I noticed that a message related to networking came up and the boot process paused; the message read something like "waiting for network configuration" and, after a while, "waiting another 60 seconds...". Once booted, I realised that any command requiring a network connection was not working - ping, apt-get install, etc. On firing the ifup eth0 command, I get the error "RTNETLINK answers: File exists. Failed to bring up eth0." I also realised, while searching the web for this problem, that this is probably one of the most common networking-related issues; however, most of the questions are about setting up multiple IPs on the same machine. ifdown eth0 also fails, stating that eth0 is not configured. My /etc/network/interfaces file has a simple configuration for a static IP:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address xx.xx.xx.xx
            netmask xx.xx.xx.xx
            broadcast xx.xx.xx.xx
            gateway xx.xx.xx.xx
            dns-nameservers xx.xx.xx.xx

    The strangest part of this problem is that, while I can't connect to anything outside, I can ping this particular server using the static IP configured in the interfaces file, and I can even SSH into it! I'm really at my wits' end with this problem, and any guidance is much appreciated. Thanks!

    Read the article

  • Compiling custom kernel 3.7.x lowlatency on Ubuntu 12.04

    - by FlabbergastedPickle
    All, I have a peculiar problem trying to compile a lowlatency flavor of the latest 3.7 kernel. I retrieved the prepatched source from Launchpad using bzr, compiled it with the usual make-kpkg using the current config file plus default options for the rest, installed the kernel and booted into it. Everything works except for the fglrx and wl drivers that I had to install in the original 12.04 lowlatency kernel. So, I tried recompiling these and succeeded with both of them (no errors were reported) - the wl driver required a minor adjustment to a system.h include, while the latest fglrx 12.11 beta11 (released yesterday, Dec. 3rd, 2012) compiled without a hitch. Yet, when I try to modprobe either module (both having in common the fact that they were built after the kernel - fglrx as a deb, and wl via the usual make/make install), I get "FATAL: no MODULENAME module found" (MODULENAME being either wl or fglrx). The graphics driver watermark shows 3D crossed out and "for testing purposes" (or "unsupported hardware", I can't remember), and no fglrx or wl is loaded. More mysteriously, dmesg shows no attempt on the kernel's behalf to load the said drivers, even though they are clearly in the right /lib/modules/KERNEL_VERSION folder. How is this possible? Has something fundamentally changed in the 3.7 kernel that would prevent modprobing of these? I know that the module signing option was merged recently, but as far as I could tell the kernel config file generated by the build process had it disabled. OTOH, while building the wl driver, I did get a warning that the driver was not signed... Then again, even if the kernel disallowed loading of those modules, shouldn't dmesg reflect that? Any thoughts on this one are most appreciated.

    Read the article

  • How often is seq used in Haskell production code?

    - by Giorgio
    I have some experience writing small tools in Haskell and I find it very intuitive to use, especially for writing filters (using interact) that process their standard input and pipe it to standard output. Recently I tried to use one such filter on a file that was about 10 times larger than usual and I got a Stack space overflow error. After doing some reading (e.g. here and here) I have identified two guidelines to save stack space (experienced Haskellers, please correct me if I write something that is not correct):

        Avoid recursive function calls that are not tail-recursive (this is valid for all functional languages that support tail-call optimization).
        Introduce seq to force early evaluation of sub-expressions so that expressions do not grow too large before they are reduced (this is specific to Haskell, or at least to languages using lazy evaluation).

    After introducing five or six seq calls in my code, my tool runs smoothly again (even on the larger data). However, I find the original code was a bit more readable. Since I am not an experienced Haskell programmer, I wanted to ask whether introducing seq in this way is common practice, and how often one would normally see seq in Haskell production code. Or are there techniques that allow one to avoid using seq so often and still use little stack space?

    Read the article

  • Android From Local DB (DAO) to Server sync (JSON) - Design issue

    - by Taiko
    I sync data between my local DB and a server, and I'm looking for the cleanest way to model all of this. I have a com.something.db package that contains a DataHelper and a couple of DAO classes representing objects stored in the DB (I didn't write that part):

        com.something.db
        -- public DataHelper
        -- public Employee
             @DatabaseField   e.g. "name" will be an actual column name in the DB
             - name
             @DatabaseField
             - salary
             ... (all in all 50 fields)

    I have a com.something.sync package that contains all the implementation detail on how to send data to the server. It boils down to a ConnectionManager that is fed by different classes implementing a 'Request' interface:

        com.something.sync
        -- public interface ConnectionManager
        -- package ConnectionManagerImpl
        -- public interface Request
        -- package LoginRequest
        -- package GetEmployeesRequest

    My issue is that at some point in the sync process I have to JSONise and de-JSONise my data (e.g. the Employee class). But I really don't feel like having the same Employee class be responsible for both its JSONisation and its actual representation inside the local database. It really doesn't feel right, because I carefully decoupled the rest; I am only stuck on this JSON thing. What should I do? Should I write 3 Employee classes?

        EmployeeDB
        -- @DatabaseField fields mirroring the DB columns: name, salary, etc... 50 fields
        EmployeeInterface
        -- getName, getSalary, etc... 50 getters
        EmployeeJSON
        -- JSON_KEY_NAME = "name"   (the JSON key happens to match the column name, but that isn't a requirement)
        -- JSON_KEY_SALARY = "salary"
        -- name, salary, etc... 50 fields

    It feels like a lot of duplication. Is there a common pattern I can use there?
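
    To make the three-class idea concrete, here is a hedged sketch (in Python rather than the poster's Java, with one field instead of 50, all names illustrative): the domain/DB class knows nothing about JSON, and a separate mapper owns the JSON keys.

        class EmployeeDb:
            """Persistence representation: fields mirror the DB columns."""
            def __init__(self, name: str, salary: float):
                self.name = name
                self.salary = salary

        class EmployeeJsonMapper:
            """Wire representation: owns the JSON keys and nothing else."""
            KEY_NAME = "name"      # happens to match the column name; not required
            KEY_SALARY = "salary"

            @classmethod
            def to_json(cls, e: EmployeeDb) -> dict:
                return {cls.KEY_NAME: e.name, cls.KEY_SALARY: e.salary}

            @classmethod
            def from_json(cls, d: dict) -> EmployeeDb:
                return EmployeeDb(d[cls.KEY_NAME], d[cls.KEY_SALARY])

    The sync layer then depends only on the mapper, and the DAO layer only on the persistence class, so a change to the wire format never touches the database code.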

    Read the article

  • Apache virtual hosts - Resources on website not loaded when accessed from other hostname than localhost

    - by Christian Stadegaart
    Running virtual hosts on Mac OS X 10.6.8 with Apache 2.2.22. /etc/hosts is as follows:

        127.0.0.1        localhost 3dweergave studio-12.fritz.box
        255.255.255.255  broadcasthost
        ::1              localhost
        fe80::1%lo0      localhost

    Virtual hosts configuration:

        NameVirtualHost *:80

        <VirtualHost *:80>
            DocumentRoot "/opt/local/www/3dweergave"
            ServerName 3dweergave
            ErrorLog "logs/3dweergave-error_log"
            CustomLog "logs/3dweergave-access_log" common
            <Directory "/opt/local/www/3dweergave">
                Options Indexes FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

        <VirtualHost *:80>
            ServerName main
        </VirtualHost>

    This will output the following settings:

        *:80 is a NameVirtualHost
        default server 3dweergave (/opt/local/apache2/conf/extra/httpd-vhosts.conf:21)
        port 80 namevhost 3dweergave (/opt/local/apache2/conf/extra/httpd-vhosts.conf:21)
        port 80 namevhost main (/opt/local/apache2/conf/extra/httpd-vhosts.conf:34)

    I made 3dweergave the default server by putting it first in the list. This causes all undefined virtual hosts' names to load 3dweergave, so http://localhost points to 3dweergave. (Normally the first in the list would be the virtual host main, and localhost would point to main, but for testing purposes I switched them.) When I navigate to http://localhost, my CakePHP default homepage shows as expected (screenshot 1). But when I navigate to http://3dweergave, my CakePHP default homepage doesn't show as expected: it looks like every relative link to resources is rejected by the server (screenshot 2). For example, the CSS isn't loaded. When I open the source and click on the link, the CSS file opens in the browser without errors, but when I run FireBug while loading the webpage, it seems the CSS file isn't retrieved. (<link rel="stylesheet" type="text/css" href="/css/cake.generic.css" />) How can I fix this unwanted behaviour?

    Read the article

  • when I type apt-get -f install, I get the error message

    - by gene
    xserver-xorg-core (2:1.11.4-0ubuntu10.8) breaks xserver-xorg-video-5 and is installed. I also cannot upgrade my software; it says that the package system is broken, with the detail:

        The following packages have unmet dependencies:
        xserver-xorg-core: Depends: xserver-common (>= 2:1.11.4-0ubuntu10.8) but 2:1.11.4-0ubuntu10.8 is installed

    When I issue sudo apt-get update, the output seems fine (sorry, the output has too many links to post in full; the source is http://archive.ubuntu.com):

        Reading package lists... Done

    When I issue sudo apt-get dist-upgrade, the output is:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run 'apt-get -f install' to correct these.
        The following packages have unmet dependencies:
         xserver-xorg-core : Breaks: xserver-xorg-video-5
        E: Unmet dependencies. Try using -f.

    When I issue sudo apt-get -f install, the output is:

        dpkg: dependency problems prevent configuration of xserver-xorg-video-radeon:
         xserver-xorg-core (2:1.11.4-0ubuntu10.8) breaks xserver-xorg-video-5 and is installed.
         xserver-xorg-video-radeon (1:6.12.1-0ubuntu2) provides xserver-xorg-video-5.
        dpkg: error processing xserver-xorg-video-radeon (--configure): dependency problems - leaving unconfigured
        No apport report written because the error message indicates it's a followup error from a previous failure.
        Errors were encountered while processing:
         xserver-xorg-video-radeon
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • LiveMeeting VC PowerShell PASS – Troubleshooting SQL Server with PowerShell

    - by Laerte Junior
    Guys, join me on Wednesday, July 18th at 12 noon EDT (GMT -4) for a presentation called Troubleshooting SQL Server with PowerShell. It will be in English, so please make allowances for this. I'm sure that you're aware that my English is not perfect, but it is not so bad; I will do my best, you can be sure. The registration link will be available soon from PowerShell.sqlpass.org, so I hope to see you there. It will be a session without slides - just code, pure PowerShell code. Trust me, we will see a lot of COOL stuff. Big thanks to Aaron Nelson (@sqlvariant) for the opportunity! Here are some more details about the presentation, "Troubleshooting SQL Server with PowerShell - The Next Level": It is normal for us to face poorly performing queries or even complete failures in our SQL Server environments. This can happen for a variety of reasons, including poor database designs, hardware failure, improperly configured systems, and OS updates applied without testing. As database administrators, we need to take precautions to minimize the impact of these problems when they occur, and so we need the tools and methodology required to identify and solve issues quickly. In this session we will use PowerShell to explore some common troubleshooting techniques used in our day-to-day work as a DBA, including gathering performance counters on several servers at the same time using background jobs, identifying blocked sessions, and reading and filtering the SQL error log even if the instance is offline. The approach will use some advanced PowerShell techniques that allow us to scale the code to multiple servers and run the data collection asynchronously.

    Read the article

  • Companies and Ships

    - by TechnicalWriting
    I have worked for small, medium, large, and extra-large companies, and they have something in common with ships. These metaphors have been used before, I know, but I will have a go at them.

    The small company is like a speed boat: exciting and fast, and it can turn on a dime, literally. Captain and crew share a lot of the work. A speed boat has a short range and needs to refuel a lot, and it has difficulty getting through bad weather. (Small companies often live quarter to quarter. By the way, if a larger company is living quarter to quarter, it is taking on water.)

    The medium company is like a battleship. It can maneuver, has a longer range, and the crew is focused on its mission. Its main concerns are the other battleships trying to blow it out of the water, but it can respond quickly. Bad weather can jostle it, but it can get through most storms.

    The large company is like an aircraft carrier: a floating city. It is well-provisioned and can carry a specialized load for a very long range. Because of its size and complexity, it has to be well-organized to be effective, and most of its functions are specialized (with little to no functional cross-over). There are many divisions and layers between captain and crew. It is not very maneuverable; it has to set its course well in advance and have a plan of action.

    The extra-large company is like a cruise liner. It also has to be well-organized, and changes in direction are often slow. Some of the people are hard at work behind the scenes to run the ship; others can be along for the ride. They sail the same routes over and over again (often happily), with the occasional cosmetic face-lift to the ship and entertainment. It should stay in warm, friendly waters and avoid risky speed through fields of icebergs.

    I have enjoyed my career on the various Ships of Technical Writing, but I get most of my juice from the battleship, where I am closer to the campaign and my contributions have a greater impact on success.

    Mark Metcalfe
    www.linkedin.com/in/MarkMetcalfe

    Read the article

  • How to debug lack of sound in Asus EEE PC

    - by Kalmar
    I have an Asus EEE PC 1225B with a fresh Lubuntu 12.04 install, and no sound. It doesn't seem to be a common problem, so I have to do some research into what's up. I tried running alsamixer, so I know I have a Realtek ALC269VB with nothing muted unexpectedly. What can I do next to identify and solve the problem? Additional info: alsamixer shows two cards, HD-Audio Generic and HDA ATI-SB (Realtek ALC269VB); the first one is muted.

        ~$ aplay
        ALSA lib pcm_dmix.c:1018:(snd_pcm_dmix_open) unable to open slave
        aplay: main:682: blad otwierania audio: Nie ma takiego pliku ani katalogu

    (The Polish part translates as "error opening audio: No such file or directory".)

        ~$ sudo lspci -v | grep -A7 -i "audio"
        00:01.1 Audio device: Advanced Micro Devices [AMD] nee ATI Wrestler HDMI Audio [Radeon HD 6250/6310]
            Subsystem: ASUSTeK Computer Inc. Device 103b
            Flags: bus master, fast devsel, latency 0, IRQ 44
            Memory at feb44000 (32-bit, non-prefetchable) [size=16K]
            Capabilities: [50] Power Management version 3
            Capabilities: [58] Express Root Complex Integrated Endpoint, MSI 00
            Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
            Capabilities: [100] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
        --
        00:14.2 Audio device: Advanced Micro Devices [AMD] nee ATI SBx00 Azalia (Intel HDA) (rev 40)
            Subsystem: ASUSTeK Computer Inc. Device 103b
            Flags: bus master, slow devsel, latency 32, IRQ 16
            Memory at feb40000 (64-bit, non-prefetchable) [size=16K]
            Capabilities: [50] Power Management version 2
            Kernel driver in use: snd_hda_intel
            Kernel modules: snd-hda-intel

    Read the article

  • Challenge Ends on Friday!

    - by Yolande Poirier
    This is your last chance to win a JavaOne trip. Submit a project video and code for the IoT Developer Challenge by this Friday, May 30. Twelve JavaOne trips will be awarded across three professional teams and one student team, and members of two further student teams will win laptops and certification training vouchers. Ask your last-minute questions on the coaching form or the Challenge forum; they will be answered promptly. Your project video should explain how your project works; any common video format such as mp4, avi, or mov is fine. Your project must use Java Embedded - whether Java SE Embedded or Java ME Embedded - with the hardware of your choice, including any devices, boards, and IoT technology. Projects will be judged on implementation, innovation, and business usefulness. More details are on the IoT Developer Challenge website. Just for fun, here is a video of Vinicius Senger giving a tour of his home lab, showing his boards and gadgets.

    Read the article

  • Which pattern is best for large project

    - by shamim
    I have several years of software development experience, but I am not a particularly adroit programmer; to perform better I need helping hands. Recently I engaged in an ERP project. For this project I want a very effective structure, one that will be easily maintainable with no compromise on performance. My old projects use the following structure:

        Entity Layer
        BusinessLogic Layer
        DataLogic Layer
        UI Layer

    (The picture below describes how they are internally connected.) For my new project I want to change the structure and follow these layers:

        Core Layer (common)
        BLL
        DAL
        Model
        UI

    (Again, the picture below describes how they are internally connected.) Though after some googling, some initial questions are still unclear to me:

        Is Entity Framework a good idea for the new project?
        Will it increase my project's performance?
        Will it be more maintainable than the previous structure?
        What are Entity Framework's core advantages/disadvantages?

    I need help selecting the best structure for my project. Will my new structure be better than the old one?

    Read the article

  • How Can I Effectively Interview an Oracle Candidate?

    - by Tim Medora
    First, I browsed through SO for matching questions and didn't find one, but please point me in the right direction if this exact question has already been asked. I work with and around programmers of various skill levels on various platforms. I would consider my skills to be strong in terms of relational database design, query development, and basic performance tuning and administration; I'm mid-level when it comes to database theory. My team is looking to me to ensure that we have the best talent on staff - in this case, an engineer experienced in Oracle administration. To me, a well-rounded database administrator, regardless of platform, should also be competent in developing against the database, so that is also a requirement. However, my database skills are centered around SQL Server 200x, with experience in a few other products like SAP MaxDB, Access, and FoxPro. How can I thoroughly assess the skills of an Oracle engineer? I can ask high-level database theory questions and talk about routine tasks that are common across platforms, but I want to dig deep enough that I can be confident in the people I hire. Normally, I would alternate very specific questions that have a right/wrong answer with architectural questions that might have several valid answers. Does anyone have an interview template, specific questions, or any other knowledge that they can share? Even knowing the meaningful Oracle-related certifications would be a help. Thank you. EDIT: All the answers have been very helpful so far and I have given upvotes to everyone. I'm surprised that there are already 3 close votes on this question as "off topic". To be clear, I am specifically asking how an MS SQL Server engineer (like myself) can effectively interview a person with different but symbiotic skills. The question has already received specific, technical answers which have improved my own database design and programming skills. If this is more appropriate as a community wiki, please convert it.

    Read the article

  • Part 4: Development Standards or How to share

    - by volker.eckardt(at)oracle.com
    Although we usually introduce the custom development part of EBS projects as "a small piece only" that "we will avoid as much as possible", the development effort can be enormous and should therefore be well covered by project standards. Any additional solution, software tool, or product will influence the custom development rules (by adding, removing, or replacing sections). It is very common in EBS projects to create a so-called "MD.030 Development Standards" document and put everything related to development conventions into it. This document gets approval and is shared among all developers. Later, additional sections have to be added, and usually the development lead is responsible for doing this. However, sometimes the development techniques used are not documented properly, and the development solutions therefore deviate from each other, or from the initially agreed standards. My advice would be the following: keep the MD.030 as a base document, and add a wiki on top. The "Development Wiki" covers the following:

        Collect input from every developer without updating the MD.030 directly
        Collect additional topics that might need further specification
        Allow discussion of such topics by reviewing/updating the wiki directly
        Add decisions or open questions right into it

    In one of my own projects we used this Developer Wiki quite extensively, and my experience is very positive. We had different sections in it, good cross-references, and also additional material like code templates, links to external web pages, etc. By using this wiki, the development standards became "owned" by the right group of people: the developers. They recognized that information sharing can improve overall development quality and also reduce the workload on individuals. In the end, the wiki was much more accurate and helpful for daily development work than our initial MD.030, and we all decided to retire the document completely.

    Summary: Information sharing in the development area is very important! The usual "MD.030 Development Standards" is a good starting point, but it should be combined with a "Development Wiki", allowing everyone to address and discuss necessary improvements. A well-structured wiki can replace some sections of the document completely.

    Side note: the corresponding task in Oracle OUM (Oracle Unified Method) is DS.050 'Determine Design and Build Standards'.

    Volker

    Read the article

  • Oracle Database Appliance - How to Sell a Unique Product : Webcast Replay

    - by Cinzia Mascanzoni
    Learn about:

        ODA Benefits: Fast, Easy, Cost Efficient, Highly Reliable
        Feedback from early Customer Wins: What can we learn?
        Objection Handling: Overcoming the most common customer questions
        Going beyond the Database: The ODA Eco System for applications, backup & more

    If you missed the webcasts in April, go to the EMEA VAD Resource Center - Enablement Tab, click here and follow the instructions to access the replay.

    Read the article

  • I have discovered a fundamental truth about TV shows and plots

    - by Steve Loethen
    For years, we have all known (okay, maybe I give people too much credit) that there is a very small, finite number of plots. I propose a project: let's use the blogosphere to catalog those plotlines, and then find and document the episodes of shows that use them, including the show title. As evidence, how many shows have used the following plot line: the standard “evil twin”? Once relegated to soaps, it has shown up in crime shows, with the twist of DNA. Step one, concentrate on this one. Tell me about every show you recall that has used this plot. I will collect and document the shows on my website (www.loethen.net) and we can build a database of the plots. Step two, what other common themes should I offer up? How about the bigamist plotline? The “bad guy was dead” plotline (revenge from the dead)? The “vast government conspiracy” plotline? Let the games begin....

    Read the article

  • BizTalk 2009 - How do I do t"HAT"?

    - by StuartBrierley
    In my previous life working with BizTalk Server 2004, I came to view HAT (the Health and Activity Tracking tool) as one of my first ports of call in case of problems with any of our BizTalk solutions. When you move to BizTalk Server 2009 it is quickly apparent that HAT is no longer with us. HAT was useful in BizTalk 2004 mainly because it provided developers and administrators with a number of useful queries and views of what was going on inside BizTalk at runtime: when and what type of messages were received and sent, what messages had been suspended, what orchestrations were running or suspended; you could even follow the process flow of a message or orchestration to see what was going on. With BizTalk Server 2009, much of the functionality of HAT can now be found in the BizTalk Administration console. Select a BizTalk Group and you will be shown the Group Hub Overview page, which provides a number of default queries that replicate some of those found in the old HAT. You can also use the Group Hub page to create new queries. These can then be saved and loaded in other Group Hub instances - useful for creating queries in development for later use in Test, Pseudo-Live and Live environments. In the next few posts I am going to look at some of the common queries that we might miss from HAT and recreate them (or something close) using the new query option:

        Messages - last 100 received
        Messages - last 100 sent
        Messages - last 50 suspended
        Service instances - last 100

    I have yet to try the updated Admin-HAT-Console in anger, and after using old-HAT for so long it may take some getting used to, but so far I would say that moving the HAT functionality into the BizTalk Administration console was probably the correct way to go. Having one tool as the place to look for the combined functionality on offer certainly seems to be the sensible option.

    Read the article

  • Windows 8 Store App Crash Logs

    - by David Paquette
    I was recently working on a Windows 8 app, and the application was crashing occasionally.  When resuming the application, the app would crash and close immediately without providing any feedback or information on what went wrong.  The crash was very difficult to reproduce, and I could never get the crash to occur when I was debugging through Visual Studio.  My app was crashing, and I had no idea what was going wrong!  HELP!!! After doing some digging, I found that when a Windows 8 Store App crashes, an error is logged in Windows Administrative Events.  You can view the details of any app crash by launching the Event Viewer and selecting Administrative Events under Custom Views.  The Source of the error will be listed as AppHost.  AppHost is the process that runs your Windows 8 Store App.  The error details contain all the information you would expect to find, including a stack trace and line numbers.   Windows 8 Tip:  A shortcut for launching the Event Viewer in Windows 8.  Right click on the bottom left corner of your desktop (where you normally click to go to the Start Screen).  A menu will appear with shortcuts to a number of common system tasks such as Event Viewer, Task Manager, Command Prompt, and Device Manager.

    Read the article

  • Who can change the View in MVC?

    - by Luke
    I'm working on a thick-client graph display and manipulation application, and I'm trying to apply the MVC pattern to our 3D visualization component. Here is what I have for the Model, View, and Controller:

        Model - The graph and its metadata. This includes vertices, edges, and the attributes of each. It does not contain position information, icons, colors, or anything display-related.
        View - This would commonly be called a scene graph. It includes the 3D display information, texture information, color information, and anything else related specifically to the visualization of the model.
        Controller - The controller takes the view and displays it in a window using OpenGL (but it could potentially be any 3D graphics package).

    The application has various "layouts" that change the position of the vertices in the display. For instance, one layout may arrange the vertices in a circle. Is it common for these layouts to access and change the view directly? Should they go through the Controller to access the View? If they go through the Controller, should they just ask for direct access to the View, or should each change go through the Controller? I realize this is a bit different from the standard MVC example, where there are a finite number of Views. In this case, the View can change in an infinite number of ways. Perhaps I'm shattering some basic principle of MVC here. Thanks in advance!
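
    For illustration, one way to keep layouts from reaching into the View directly is to have them compute positions from the Model and hand the results to the Controller, which acts as the single choke point for View changes. A minimal, hedged Python sketch (all class and method names are hypothetical, not from the poster's code):

        import math

        class Model:
            def __init__(self, vertices):
                self.vertices = vertices   # graph data only, no display info

        class View:
            def __init__(self):
                self.positions = {}        # vertex -> (x, y, z) scene data

        class Controller:
            def __init__(self, view):
                self.view = view
            def set_position(self, vertex, pos):
                # every View mutation funnels through the Controller
                self.view.positions[vertex] = pos

        class CircleLayout:
            def apply(self, model, controller):
                n = max(len(model.vertices), 1)
                for i, v in enumerate(model.vertices):
                    a = 2 * math.pi * i / n
                    controller.set_position(v, (math.cos(a), math.sin(a), 0.0))

        # usage: the layout never touches the View itself
        view = View()
        CircleLayout().apply(Model(["a", "b", "c"]), Controller(view))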

    Read the article

  • How to Assure an Effective Data Model

    As a general rule, in my opinion, the effectiveness of a data model is directly related to the accuracy and completeness of a project's requirements. For example, there is no need to work on a very detailed data model when the details surrounding it have not been defined or even clarified. Developing data models while the clarity of project requirements is limited tends to introduce design issues, because the details needed to create an effective data model are not yet known. One way to avoid this is to create data models that correspond to the maturity of the existing project requirements, so that when requirements are updated, new data models can be created based on any new discoveries at a fine-grained level. This allows data models composed of general entities to be created initially, while a project's requirements are still vague, and then refined as new and more substantial requirements are defined or redefined. It also promotes communication among all stakeholders as they define and finalize project requirements.

    In addition, here are some general tips that can be applied to data modeling on any project:

        Initially model all data generally, and slowly refactor the data model as new requirements and business constraints are applied to the project.
        Ensure that data modelers have the proper tools and training they need to design a data model accurately.
        Create a common location for all project documents so that everyone can review a project's data models along with any other project documentation.
        All data models should follow a clear naming schema that tells readers the intended purpose of the data and how it will be applied within the project.

    Read the article

  • Sharding / indexing strategy for multi-faceted search

    - by Graham
    I'm currently thinking about our database structure and how to modify it for scale. Specifically, we're considering ElasticSearch for our search functionality. One common pattern with ElasticSearch seems to be the 'user-routing' pattern: using routing to ensure that any one user's data resides on the same shard. This is great for client-specific search, e.g. Gmail. Our application has a constraint such that any user will have a maximum of a few thousand documents, so this pattern seems like a good candidate. However, our search needs to work across all users, as well as targeting a specific user (so I might search my content, Alice's content, or all content). Similarly, we need to provide full-text search across any timeframe, from recent months to several years ago. I'm thinking of combining the 'user-routing' and 'index-per-time-interval' patterns:

        I create an index for each month.
        By default, searches are aliased against the most recent X months.
        If no results are found, we can search against the previous X months.
        As we grow, we can reduce the interval X.
        Each document is routed by the user ID.

    So, this should let us do the following:

        Search by user: this searches all indices, hitting one shard in each.
        Search by time: this searches ~2 indices (by default) across all shards.

    Is this a reasonable approach, considering we may scale to multi-million+ documents? Or should I denormalize the data somehow, so that user searches are performed on a totally separate index from date searches? Thanks for any pros/cons of the above scenario.
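
    As a hedged sketch of the combined scheme (index names, fields, and months are hypothetical, and the calls follow the elasticsearch-py client style - check them against the client version you use):

        from datetime import datetime
        from elasticsearch import Elasticsearch

        es = Elasticsearch()

        def index_doc(user_id, doc):
            # one index per month; route the document by user id
            name = "docs-{:%Y-%m}".format(datetime.utcnow())
            es.index(index=name, routing=user_id, body=doc)

        def search_user(user_id, query, months):
            # user-targeted search: routing hits one shard per monthly index
            names = ",".join("docs-" + m for m in months)
            return es.search(index=names, routing=user_id, body=query)

        def search_all(query, months):
            # cross-user search: no routing, hits all shards of those indices
            names = ",".join("docs-" + m for m in months)
            return es.search(index=names, body=query)

        # e.g. search_user("alice", {"query": {"match": {"text": "report"}}},
        #                  ["2014-01", "2013-12"])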

    Read the article

  • SQL Server Database Settings

    - by rbishop
    For those using Data Relationship Management on Oracle DB this does not apply, but for those using Microsoft SQL Server it is highly recommended that you run with Snapshot Isolation Mode. The Data Governance module will not function correctly without this mode enabled. All new Data Relationship Management repositories are created with this mode enabled by default. This mode makes SQL Server (2005+) behave more like Oracle DB, where readers simply see older versions of rows while a write is in progress, instead of being blocked by locks while a write takes place. Many common sources of deadlocks are eliminated. For example, if one user starts a 5-minute transaction updating half the rows in a table, without snapshot isolation everyone else reading the table will be blocked waiting. With snapshot isolation, they will see the rows as they were before the write transaction started. Conversely, if the readers had started first, the writer won't be stuck waiting for them to finish reading; the writes can begin immediately without affecting the current transactions. To make this change, make sure no one is using the target database (e.g. put it into single-user mode), then run these commands:

        ALTER DATABASE [DB] SET ALLOW_SNAPSHOT_ISOLATION ON
        ALTER DATABASE [DB] SET READ_COMMITTED_SNAPSHOT ON

    Please make sure you coordinate with your DBA team to ensure tempdb is set up appropriately to support snapshot isolation mode, as the extra row versions are stored in tempdb until the transactions are committed. Let me also take this opportunity to extremely strongly recommend that you use solid-state storage for your databases, with appropriate iSCSI, Fibre Channel, or SAN bandwidth. The performance gains are significant, and there is no excuse for not using 100% solid-state storage in 2013. Actually, unless you need to store petabytes of archival data, there is no excuse for using hard drives in any system, whether laptops, desktops, application servers, or database servers. The productivity benefits alone are tremendous, not to mention power consumption, heat, etc.

    Read the article
