Search Results

Search found 2093 results on 84 pages for 'logical'.


  • How many developers before continuous integration becomes effective for us?

    - by Carnotaurus
    There is an overhead associated with continuous integration, e.g., set-up, re-training, awareness activities, stoppages to fix "bugs" that turn out to be data issues, enforced separation-of-concerns programming styles, etc. At what point does continuous integration pay for itself?

    EDIT: These were my findings. The set-up was CruiseControl.Net with Nant, reading from VSS or TFS. Here are a few reasons for failure, which have nothing to do with the setup:

    Cost of investigation: the time spent investigating whether a red light is due to a genuine logical inconsistency in the code, to data quality, or to another source such as an infrastructure problem (e.g., a network issue, a timeout reading from source control, a third-party server being down, etc.).

    Political costs over infrastructure: I considered performing an "infrastructure" check for each method in the test run. I had no solution to the timeout except to replace the build server. Red tape got in the way and there was no server replacement.

    Cost of fixing unit tests: a red light due to a data quality issue could be an indicator of a badly written unit test. So, data-dependent unit tests were rewritten to reduce the likelihood of a red light due to bad data. In many cases, the necessary data was inserted into the test environment so that the unit tests could run accurately; making the data more robust makes any test that depends on it more robust (a sketch of this pattern appears at the end of this post). Of course, this worked well!

    Cost of coverage, i.e., writing unit tests for already existing code: there was the problem of unit test coverage. Thousands of methods had no unit tests, so a sizeable number of man-days would have been needed to create them. As this would be too difficult to justify in a business case, it was decided that unit tests would be required only for new public methods going forward. Methods without a unit test were termed "potentially infra red". An interesting point here is that static methods were a moot point, since it was unclear how a specific static method's failure could be uniquely determined.

    Cost of bespoke releases: Nant scripts only go so far. They are not that useful for, say, CMS-dependent builds for EPiServer or for any UI-oriented database deployment.

    These are the types of issues that occurred on the build server for hourly test runs and overnight QA builds. I contend that these are unnecessary, as a build master can perform these tasks manually at release time, especially with a one-man band and a small build. So, single-step builds have not justified the use of CI in my experience. What about the more complex, multi-step builds? These can be a pain to build, especially without a Nant script, and even having created one, they were no more successful. The cost of fixing the red-light issues outweighed the benefits. Eventually, developers lost interest and questioned the validity of the red light.

    Having given it a fair try, I believe that CI is expensive and that there is a lot of working around the edges instead of just getting the job done. It is more cost effective to employ experienced developers who do not make a mess of large projects than to introduce and maintain an alarm system. This is the case even if those developers leave. It doesn't matter if a good developer leaves, because the processes he follows would ensure that he writes requirement specs and design specs, sticks to the coding guidelines, and comments his code so that it is readable; all of this is reviewed. If that is not happening then his team leader is not doing his job, which should be picked up by his manager, and so on. For CI to work, it is not enough to just write unit tests, attempt to maintain full coverage, and ensure a working infrastructure for sizeable systems.

    The bottom line: one might question whether fixing as many bugs as possible before release is even desirable from a business perspective. CI involves a lot of work to capture a handful of bugs that the customer could identify in UAT, or that the company could get paid to fix as part of a client service agreement once the warranty period expires anyway.
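    A minimal illustration of the data-seeding test pattern mentioned under "Cost of fixing unit tests" above, assuming NUnit; Order and OrderRepository are hypothetical stand-ins rather than types from the project described:

    ```csharp
    using System.Collections.Generic;
    using System.Linq;
    using NUnit.Framework;

    // Stand-ins so the sketch compiles on its own; in the scenario above these would be
    // the real data-access classes backed by an isolated test database.
    public class Order { public int CustomerId; public decimal Amount; }

    public class OrderRepository
    {
        private readonly List<Order> _orders;
        public OrderRepository(List<Order> orders) { _orders = orders; }
        public decimal TotalForCustomer(int customerId) =>
            _orders.Where(o => o.CustomerId == customerId).Sum(o => o.Amount);
    }

    [TestFixture]
    public class OrderTotalsTests
    {
        private List<Order> _seed;

        [SetUp]
        public void SeedKnownData()
        {
            // Each test seeds exactly the rows it depends on, so a red light means a
            // genuine logic defect rather than whatever data happened to be lying around.
            _seed = new List<Order>
            {
                new Order { CustomerId = 42, Amount = 10m },
                new Order { CustomerId = 42, Amount = 15m },
                new Order { CustomerId = 7,  Amount = 99m },
            };
        }

        [Test]
        public void TotalForCustomer_SumsOnlyThatCustomersOrders()
        {
            var repo = new OrderRepository(_seed);
            Assert.AreEqual(25m, repo.TotalForCustomer(42));
        }
    }
    ```

    The point of the pattern is simply that the test owns its data; whether the seed goes into an in-memory collection or a scratch schema in the test database is a project-level choice.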

    Read the article

  • How do I mount my External HDD with filesystem type errors?

    - by Snuggie
    I am a relatively new Ubuntu user and I am having some difficulty mounting my external 2TB HDD. When I first installed Linux my external HDD was working just fine, however, it has stopped working and I have a lot of important files on there that I need. Before my HDD would automatically mount and no worries. Now, however, it doesn't automatically mount and when I try to manually mount it I keep running into filesystem type errors that I can't seem to get past. Below are images that depict my step by step process of how I am trying to mount my HDD along with the errors I am receiving. If anybody has any idea what I am doing wrong or how to correct the issue I would greatly appreciate it.

    Step 1) Ensure the computer recognizes my external HDD.

    pj@PJ:~$ dmesg
    ...
    [ 5790.367910] scsi 7:0:0:0: Direct-Access WD My Passport 0748 1022 PQ: 0 ANSI: 6
    [ 5790.368278] scsi 7:0:0:1: Enclosure WD SES Device 1022 PQ: 0 ANSI: 6
    [ 5790.370122] sd 7:0:0:0: Attached scsi generic sg2 type 0
    [ 5790.370310] ses 7:0:0:1: Attached Enclosure device
    [ 5790.370462] ses 7:0:0:1: Attached scsi generic sg3 type 13
    [ 5792.971601] sd 7:0:0:0: [sdb] 3906963456 512-byte logical blocks: (2.00 TB/1.81 TiB)
    [ 5792.972148] sd 7:0:0:0: [sdb] Write Protect is off
    [ 5792.972162] sd 7:0:0:0: [sdb] Mode Sense: 47 00 10 08
    [ 5792.972591] sd 7:0:0:0: [sdb] No Caching mode page found
    [ 5792.972605] sd 7:0:0:0: [sdb] Assuming drive cache: write through
    [ 5792.975235] sd 7:0:0:0: [sdb] No Caching mode page found
    [ 5792.975249] sd 7:0:0:0: [sdb] Assuming drive cache: write through
    [ 5792.987504] sdb: sdb1
    [ 5792.988900] sd 7:0:0:0: [sdb] No Caching mode page found
    [ 5792.988911] sd 7:0:0:0: [sdb] Assuming drive cache: write through
    [ 5792.988920] sd 7:0:0:0: [sdb] Attached SCSI disk

    Step 2) Check if it mounted properly (it does not)

    pj@PJ:~$ df -ah
    Filesystem        Size  Used Avail Use% Mounted on
    /dev/sda1         682G  3.9G  644G   1% /
    proc                 0     0     0    - /proc
    sysfs                0     0     0    - /sys
    none                 0     0     0    - /sys/fs/fuse/connections
    none                 0     0     0    - /sys/kernel/debug
    none                 0     0     0    - /sys/kernel/security
    udev              2.9G  4.0K  2.9G   1% /dev
    devpts               0     0     0    - /dev/pts
    tmpfs             1.2G  928K  1.2G   1% /run
    none              5.0M     0  5.0M   0% /run/lock
    none              2.9G  156K  2.9G   1% /run/shm
    gvfs-fuse-daemon     0     0     0    - /home/pj/.gvfs

    Step 3) Try mounting manually using NTFS and VFAT (both as SDB and SDB1)

    pj@PJ:~$ sudo mount /dev/sdb /media/Passport/
    NTFS signature is missing.
    Failed to mount '/dev/sdb': Invalid argument
    The device '/dev/sdb' doesn't seem to have a valid NTFS. Maybe the wrong device is used? Or the whole disk instead of a partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?

    pj@PJ:~$ sudo mount /dev/sdb1 /media/Passport/
    NTFS signature is missing.
    Failed to mount '/dev/sdb1': Invalid argument
    The device '/dev/sdb1' doesn't seem to have a valid NTFS. Maybe the wrong device is used? Or the whole disk instead of a partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?

    pj@PJ:~$ sudo mount -t ntfs /dev/sdb /media/Passport/
    NTFS signature is missing.
    Failed to mount '/dev/sdb': Invalid argument
    The device '/dev/sdb' doesn't seem to have a valid NTFS. Maybe the wrong device is used? Or the whole disk instead of a partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?

    pj@PJ:~$ sudo mount -t vfat /dev/sdb /media/Passport/
    mount: wrong fs type, bad option, bad superblock on /dev/sdb, missing codepage or helper program, or other error
    In some cases useful info is found in syslog - try dmesg | tail or so

    pj@PJ:~$ sudo mount -t ntfs /dev/sdb1 /media/Passport/
    NTFS signature is missing.
    Failed to mount '/dev/sdb1': Invalid argument
    The device '/dev/sdb1' doesn't seem to have a valid NTFS. Maybe the wrong device is used? Or the whole disk instead of a partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?

    pj@PJ:~$ sudo mount -t vfat /dev/sdb1 /media/Passport/
    mount: wrong fs type, bad option, bad superblock on /dev/sdb1, missing codepage or helper program, or other error
    In some cases useful info is found in syslog - try dmesg | tail or so

    Read the article

  • Best Practices - Dynamic Reconfiguration

    - by jsavit
    This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (formerly named Logical Domains).

    Overview of Dynamic Reconfiguration

    Oracle VM Server for SPARC supports Dynamic Reconfiguration (DR), making it possible to add or remove resources to or from a domain (virtual machine) while it is running. This is extremely useful because resources can be shifted to or from virtual machines in response to load conditions without having to reboot or interrupt running applications. For example, if an application requires more CPU capacity, you can add CPUs to improve performance, and remove them when they are no longer needed. You can even use Dynamic Resource Management (DRM) policies that automatically add and remove CPUs to domains based on load.

    How it works (in broad general terms)

    Dynamic Reconfiguration is done in coordination with Solaris, which recognises a hypervisor request to change its virtual machine configuration and responds appropriately. In essence, Solaris receives a message saying "you now have 16 more CPUs numbered 16 to 31" or "8GB more RAM starting at address X" or "here's a new network or disk device - have fun with it". These actions take very little time. Solaris can then start using the new resource. In the case of added CPUs, that means dispatching processes and potentially binding interrupts to the new CPUs. For memory, Solaris adds the new memory pages to its "free" list and starts using them. Comparable actions occur with network and disk devices: they are recognised by Solaris and then used.

    Removing is the reverse process: after receiving the DR message to free specific CPUs, Solaris unbinds interrupts assigned to the CPUs and stops dispatching process threads. That takes very little time. For example:

    primary # ldm list
    NAME      STATE    FLAGS   CONS   VCPU  MEMORY  UTIL  UPTIME
    primary   active   -n-cv-  SP     16    4G      1.0%  6d 22h 29m
    ldom1     active   -n----  5000   16    8G      0.9%  6h 59m
    primary # ldm set-core 5 ldom1
    primary # ldm list
    NAME      STATE    FLAGS   CONS   VCPU  MEMORY  UTIL  UPTIME
    primary   active   -n-cv-  SP     16    4G      0.2%  6d 22h 29m
    ldom1     active   -n----  5000   40    8G      0.1%  6h 59m
    primary # ldm set-core 2 ldom1
    primary # ldm list
    NAME      STATE    FLAGS   CONS   VCPU  MEMORY  UTIL  UPTIME
    primary   active   -n-cv-  SP     16    4G      1.0%  6d 22h 29m
    ldom1     active   -n----  5000   16    8G      0.9%  6h 59m

    Memory pages are vacated by copying their contents to other memory locations and wiping them clean. Solaris may have to swap memory contents to disk if the remaining RAM isn't enough to hold all the contents. For this reason, deallocating memory can take longer on a loaded system. Even on a lightly loaded system it took 7 or 8 seconds to switch the domain below between 8GB and 24GB of RAM.

    primary # ldm set-mem 24g ldom1
    primary # ldm list
    NAME      STATE    FLAGS   CONS   VCPU  MEMORY  UTIL  UPTIME
    primary   active   -n-cv-  SP     16    4G      0.1%  6d 22h 36m
    ldom1     active   -n----  5000   16    24G     0.2%  7h 6m
    primary # ldm set-mem 8g ldom1
    primary # ldm list
    NAME      STATE    FLAGS   CONS   VCPU  MEMORY  UTIL  UPTIME
    primary   active   -n-cv-  SP     16    4G      0.7%  6d 22h 37m
    ldom1     active   -n----  5000   16    8G      0.3%  7h 7m

    What if the device is in use? (this is the anecdote that inspired this blog post)

    If CPU or memory is being removed, releasing it is pretty straightforward, using the method described above. The resources are released, and Solaris continues with less capacity. It's not as simple with a network or I/O device: you don't want to yank a device out from underneath an application that might be using it. In the following example, I've added a virtual network device to ldom1 and want to take it away, even though it's been plumbed.

    primary # ldm rm-vnet vnet19 ldom1
    Guest LDom returned the following reason for failing the operation:
        Resource Information
        ----------------------------------------------------------  -----------------------
        /devices/virtual-devices@100/channel-devices@200/network@1   Network interface net1
    VIO operation failed because device is being used in LDom ldom1
    Failed to remove VNET instance

    That's what I call a helpful error message - telling me exactly what was wrong. In this case the problem is easily solved. I know this NIC is seen in the guest as net1, so:

    ldom1 # ifconfig net1 down unplumb

    Now I can dispose of it, and even the virtual switch I had created for it:

    primary # ldm rm-vnet vnet19 ldom1
    primary # ldm rm-vsw primary-vsw9

    If I had to take the device away disruptively, I could have used ldm rm-vnet -f, but that could disrupt whoever was using it. It's better if that can be avoided.

    Summary

    Oracle VM Server for SPARC provides dynamic reconfiguration, which lets you modify a guest domain's CPU, memory and I/O configuration on the fly without a reboot. You can add and remove resources as needed, and even automate this for CPUs by setting up resource policies. Taking things away can be more complicated than giving, especially for devices like disks and networks that may contain application and system state or be involved in a transaction. LDoms and Solaris work together cooperatively to coordinate resource allocation and de-allocation in a safe and effective way. As a best practice, use dynamic reconfiguration to make the best use of your system's resources.

    Read the article

  • Managing Your First SharePoint Project or Team

    - by Mark Rackley
    (*editor’s note* If you have proper SharePoint training, know the difference between a site and a site collection, and have the utmost respect for the knowledge of your SharePoint team, skip this blog and go directly to meetdux.com; do not pass go, do not collect $200… otherwise, please proceed)

    Dear Mr. or Mrs. I-know-nothing-about-SharePoint-but-hey,-I-have-manager-in-my-title-so-I’ll-tell-you-how-to-do-your-job,

    Thank you so much for joining the Acme corporation. We appreciate your eagerness and willingness to jump in and help us accomplish all of our goals here at Acme (these roadrunner rockets don’t make themselves). You may have noticed that we have this thing called SharePoint lying around, and we have invested some time and money to make it not a complete piece of garbage. So, I thought I’d give you some pointers to help make your stay here enjoyable and productive.

    Yeah… you don’t really know SharePoint
    Just because you had a mysite at your last organization or had a SharePoint 2003 team site does NOT mean you comprehend the vastness that is SharePoint. You don’t know what’s going on behind the scenes. You don’t know what should and should not be done. No, we CAN’T just query the SQL database directly. Yes, it really does take that long. No, we can’t do that out-of-the-box.

    Your experience doesn’t mean as much as you think it means…
    Yes, I’m aware that you co-created the internet with Al Gore and have been managing projects since I was blowing up GI Joe figures with firecrackers; however, SharePoint is not like anything you have worked with before from a management perspective. Please don’t tell us the proper way to do our job or tell us how “you” would do it, and PLEASE don’t utter the words “I used to do some .NET development so let me know if you get stuck and need some guidance.” It MAY be possible for an incredible project manager to manage a SharePoint project without understanding the technology, but if you force your ideas on us or treat us like we don’t really know what we’re doing then you will prove yourself to NOT be one of those types.

    Oh no you didn’t…
    Please don’t tell us how you can bring in a group of guys from Kazakhstan to do the project for $20/hr. There are many companies out there who can do some really crappy SharePoint work, and we don’t want to be stuck maintaining their junk. Do you know what it means to deploy a solution? Neither do some of those companies out there. However, there are a few AWESOME consulting firms out there, but $150/hr is cheap for these guys. Believe me, it’s worth it though. You get what you pay for!

    Show us some respect
    We truly do appreciate and value your opinion and experience, but when we tell you something is different in SharePoint, don’t be condescending and dismiss OUR experience and opinions. We have spent a lot of time and energy learning a very complicated technology that can open up a world of possibilities when used properly. We just want to make sure it is used properly. It’s not the same as .NET development. It’s not like a regular web application. There’s more going on behind the scenes than you can possibly fathom. Have a little faith in us please and listen when we talk. You may actually learn a thing or two.

    Take some time to learn the technology
    There is hope… you don’t have to be totally worthless. Take some time to learn SharePoint. Learn what it is and what it can do. Invest some time in learning our SharePoint environment. What’s our logical architecture and taxonomy? What governance do we have in place? If you just thought “huh?” then yes, I’m talking to you.

    Sincerely,
    Your SharePoint Team

    (This rant is not pointed at any particular organization or person. If you think it’s about you, you are wrong. This is just a general rant based upon things people have told me and things I’ve seen. If you don’t think it applies to you, please move on. If you think you might be guilty of handling your SharePoint team the wrong way, then just please listen, learn, and have a little faith in your team. You all have the same goal in mind. Also, take the time to learn something about SharePoint; you will all be less frustrated with each other.)

    Read the article

  • PanelGridLayout - A Layout Revolution

    - by Duncan Mills
    With the most recent 11.1.2 patchset (11.1.2.3) there has been a lot of excitement around ADF Essentials (and rightly so); however, in all the fuss I didn't want an even more significant change to get missed - yes, you read that correctly, a more significant change! I'm talking about the new panelGridLayout component. I can confidently say that this is one of the most revolutionary components that we've introduced in 11g, even though it sounds rather boring. To be totally accurate, panelGrid was introduced in 11.1.2.2, but without any presence in the component palette or other design time support, so it was largely missed unless you read the release notes. However, in this latest patchset it's finally front and center. It's time to explore - we (really) need to talk about layout.

    Let's face it: with ADF Faces rich client, layout is a rather arcane pursuit. Once you are a layout master, all bow before you, but it's more of an art than a science, and it is often, in fact, way too difficult to achieve what should (apparently) be a pretty simple result. Here's a great example; it's a homework assignment I set for folks I'm teaching this stuff to. The requirements for this layout are:

    The header is 80px high, the footer is 30px. These are both fixed.
    The first section of the header containing the logo is 180px wide.
    The logo is centered within the top left hand corner of the header.
    The title text is start-aligned in the center zone of the header and will wrap if the browser window is narrowed. It should be aligned in the center of the vertical space.
    The about link is anchored to the right hand side of the browser with a 20px gap and again is center-aligned vertically. It will move as the browser window is reduced in width.
    The footer has a right-aligned copyright statement, again middle-aligned within a 30px high footer region and with a 20px buffer to the right hand edge. It will move as the browser window is reduced in width.
    All remaining space is given to a central zone, which, in this case, contains a panelSplitter.
    Expect that at some point in time you'll need a separate messages line in the center of the footer.

    In the homework assignment I set, I also stipulate that no inlineStyles can be used to control alignment or margins, and no use of other taglibs (e.g. JSF HTML or Trinidad HTML). So, if we take this purist approach, that basic page layout (in my stock solution) requires 3 panelStretchLayouts, 5 panelGroupLayouts and 4 spacers - not including the spacer I use for the logo and the contents of the central zone splitter - phew! The point is that even a seemingly simple layout needs a bit of thinking about, particularly when you consider stretching and browser resize behavior. In fact, this little sample actually teaches you much of what you need to know to become vaguely competent at layouts in the framework. The underlying result of "the way things are" is that most of us reach for panelStretchLayout before even finishing the first sip of coffee as we embark on a new page design. In fact, most pages you will see in any moderately complex ADF application will basically be nested panelStretchLayouts and panelGroupLayouts, sometimes many, many levels deep. So this is a problem; we've known this for some time, and now we have a good solution. (I should point out that the oft-used Trinidad trh tags are not a particularly good solution, as you're tying yourself to an HTML table based layout in that case, with a host of attendant issues in resize and bi-di behavior, but I digress.)

    So, tadaaa, I give to you panelGridLayout. PanelGridLayout, as the name suggests, takes a grid-like (dare I say slightly gridbag-like) approach to layout, dividing your layout into rows and columns with margins, sizing, stretch behaviour, colspans and rowspans all rolled in, all without the use of inlineStyle. As such, it provides a much more powerful and concise way of defining a layout such as the one above that is actually simpler and much more logical to design. The basic building blocks are the panelGridLayout itself, gridRow and gridCell. Your content sits inside the cells inside the rows, all helpfully allowing stretching, valign and halign definitions without the need to nest further panelGroupLayouts. So much simpler! If I break down the homework example above, my nested conglomerate of 12 containers and spacers can be condensed down into a single panelGridLayout with 3 rows and 5 cell definitions (39 lines of source reduced to 24 in the case of the sample). What's more, the actual runtime representation in the browser DOM is much, much simpler and cleaner, with basically one DIV per cell. (Note that just because the panelGridLayout semantics look like an HTML table does not mean that it's rendered that way!) Another hidden benefit is the runtime cost. Because we can use a single layout to achieve much more complex geometries, the client side layout code inside the browser has to work a lot less. This will be a real benefit if your application needs to run on lower powered clients such as netbooks or tablets. So it's time, if you're on 11.1.2.2 or above, to smile warmly at your panelStretchLayout, wrap the blanket around its knees and wheel it off to the Sunset Retirement Home for a well deserved rest. There's a new kid on the block and it wants to be your friend.

    Read the article

  • Oracle Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2) is Available Now!

    - by Anand Akela
    Oracle today announced the availability of Oracle Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2). It is now available for download on OTN on ALL platforms. This is the first major release since the launch of Enterprise Manager 12c in October of 2011, and the first time an Enterprise Manager release is available on all platforms simultaneously. This is primarily a stability release which incorporates fixes for many of the issues reported by early adopters and addresses much of their feedback. In addition, this release contains many new features and enhancements in areas across the board.

    New Capabilities and Features

    Enhanced management capabilities for enterprise private clouds: Introduces new capabilities to allow customers to build and manage a Java Platform-as-a-Service (PaaS) cloud based on Oracle WebLogic Server. The new capabilities include guided set-up of the PaaS cloud, self-service provisioning, automatic scale out, and metering and chargeback.

    Enhanced lifecycle management capabilities for Oracle WebLogic Server environments: Combining in-context multiple domain, patching and configuration file synchronizations.

    Integrated hardware-software management for Oracle Exalogic Elastic Cloud through features such as rack schematics visualization and integrated monitoring of all hardware and software components.

    The latest management capabilities for business-critical applications include:

    Business Application Management: A new Business Application (BA) target type and dashboard with flexible definitions provides a logical view of an application’s business transactions, end-user experiences and the cloud infrastructure the monitored application is running on.

    Enhanced User Experience Reporting: Oracle Real User Experience Insight has been enhanced to provide reporting capabilities on client-side issues for applications running in the cloud, and has been more tightly coupled with Oracle Business Transaction Management to help ensure that real-time user experience and transaction tracing data is provided to users in context.

    Several key improvements address ease of administration, reporting and extensibility for massively scalable cloud environments, including dynamic groups, self-updateable monitoring templates, bulk operations against many events, etc.

    New and Revised Plug-Ins: Several plug-ins have been updated as part of this release, resulting in either new versions or revisions. Revised plug-ins contain only bug fixes, while new plug-ins incorporate both bug fixes and new functionality.

    Plug-In Name                                              | Version
    Enterprise Manager for Oracle Database                    | 12.1.0.2 (revision)
    Enterprise Manager for Oracle Fusion Middleware           | 12.1.0.3 (new)
    Enterprise Manager for Chargeback and Capacity Planning   | 12.1.0.3 (new)
    Enterprise Manager for Oracle Fusion Applications         | 12.1.0.3 (new)
    Enterprise Manager for Oracle Virtualization              | 12.1.0.3 (new)
    Enterprise Manager for Oracle Exadata                     | 12.1.0.3 (new)
    Enterprise Manager for Oracle Cloud                       | 12.1.0.4 (new)

    Installation and Upgrade:

    All major platforms have been released simultaneously (Linux 32/64-bit, Solaris (SPARC), Solaris x86-64, IBM AIX 64-bit, and Windows x86-64).
    Enterprise Manager 12.1.0.2 is a complete release that includes both the EM OMS and Agent versions of 12.1.0.2.
    Installation options available with EM 12.1.0.2: users can do a fresh install or an upgrade from EM 10.2.0.5, 11.1, or 12.1.0.1 (Bundle Patch 1 not mandatory).
    Upgrading to EM 12.1.0.2 from EM 12.1.0.1 is not a patch application (similar to Bundle Patch 1) but is achieved through a 1-system upgrade.

    Documentation:

    The Oracle Enterprise Manager Cloud Control Introduction document provides a broad overview of capabilities and highlights "What's New" in EM 12.1.0.2. All updated Oracle Enterprise Manager documentation, including the Upgrade Guide, can be found on OTN.

    Please feel free to ask questions related to the new Oracle Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2) at the Oracle Enterprise Manager Forum. You could also share your feedback on Twitter using the hash tag #em12c or on Facebook.

    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • Thinking differently about BI delivery

    - by jamiet
    My day job involves implementing Business Intelligence (BI) solutions which, as I have said before, is simply about giving people the information they need to do their jobs. I’m always interested in learning about new ways of achieving that aim, and that is my motivation for writing blog entries that are not concerned with SQL or SQL Server per se. Implementing BI systems usually involves hacking together a bunch of third-party products with some in-house “glue” and delivering information using some shiny, expensive web-based front-end tool; the list of vendors that supply such tools is big and ever-growing. No doubt these tools have their place, and of late I have started to wonder whether they can be supplemented with different ways of delivering information. The problem I have with these separate web-based tools is exactly that - they are separate web-based tools. What’s the problem with that, you might ask? I’ll explain! They force the information worker to go somewhere unfamiliar in order to get the information they need to do their jobs. Would it not be better if we could deliver information into the tools that those information workers are already using and not force them to go somewhere else? I look at the rise of blogging over recent years and I realise that what made blogs popular is that people can subscribe to RSS feeds and have information pushed to them in their tool of choice, rather than having to go and find the information for themselves in a tool that has been foisted upon them. Would it not be a good idea to adopt the principle of subscription for the benefit of delivering BI information as well? I think it would, and in the rest of this blog entry I’ll outline such a scenario where the power of subscription could be used to enhance the delivery of information to information workers.

    Typical questions that information workers ask might be:

    What are my year-on-year sales figures?
    What was my footfall yesterday?
    How many widgets have I sold so far today?

    Each of those questions includes a time element, and that shouldn’t surprise us; any BI system that I have worked on includes the dimension of time. Now, what do people use to view and organise their time-oriented information? It’s not a trick question: they use a calendar, and in the enterprise space more often than not that calendar is managed using Outlook. Given that information workers are already looking at their calendar in Outlook anyway, would it not make sense to deliver information into that same calendar? Of course it would. Calendars are a great way of visualising information such as sales figures. Observe:

    Just in this single screenshot I have managed to convey a multitude of information. The information worker can see, at a glance, information about hourly/daily/weekly/monthly sales and, moreover, he/she is viewing that information right inside the tool that they use every day. There is no effort on their part; the information just appears hour after hour, day after day. Taking the idea further, each one of those calendar items could be a mini-dashboard in its own right. Double-clicking on an item could show a plethora of other information about that time slot, such as breaking the sales down per region or year-over-year comparisons. Perhaps the title could employ a sparkline? Loads of possibilities. The point is that calendars are a completely natural way to visualise information; we should make more use of them!

    The real beauty of delivering information using calendars for us BI developers is that it should be so easy. In the case of Outlook we don’t need to write complicated VBA code that can go and manipulate a person’s calendar; simply publishing data in a format that Outlook can understand is sufficient, and happily such formats already exist. iCalendar is the accepted format, and the even more flexible xCalendar is hopefully on its way as well.

    I’d like to make one last point, and this one is with my SQL Server hat on. Reporting Services 2008 R2 introduced the ability to publish data as subscribable Atom feeds, so it seems logical that it could also be a vehicle for delivering calendar feeds too. If you think this would be a good idea go and vote for it at Publish data as iCalendar feeds and please please please add some comments (especially if you vote it down). Work smarter, not harder!

    @Jamiet
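    Because iCalendar is just structured plain text, publishing such a feed really is as simple as the post suggests. Below is a minimal, hedged C# sketch that emits one all-day VEVENT per day of sales data; the salesByDay dictionary and the example.com UIDs are placeholders, not anything from the post:

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Text;

    public static class SalesCalendarFeed
    {
        // Builds a minimal RFC 5545-style VCALENDAR string from a date -> sales map.
        // (The spec technically wants CRLF line endings; AppendLine is close enough
        // for a sketch and Outlook is tolerant.)
        public static string Build(IDictionary<DateTime, decimal> salesByDay)
        {
            var sb = new StringBuilder();
            sb.AppendLine("BEGIN:VCALENDAR");
            sb.AppendLine("VERSION:2.0");
            sb.AppendLine("PRODID:-//Example BI//Sales Feed//EN");

            foreach (var day in salesByDay)
            {
                sb.AppendLine("BEGIN:VEVENT");
                sb.AppendLine($"UID:sales-{day.Key:yyyyMMdd}@example.com");
                sb.AppendLine($"DTSTAMP:{DateTime.UtcNow:yyyyMMddTHHmmssZ}");
                // All-day event: DTSTART is a DATE value, DTEND is the following day.
                sb.AppendLine($"DTSTART;VALUE=DATE:{day.Key:yyyyMMdd}");
                sb.AppendLine($"DTEND;VALUE=DATE:{day.Key.AddDays(1):yyyyMMdd}");
                sb.AppendLine($"SUMMARY:Sales {day.Key:yyyy-MM-dd}: {day.Value:N0}");
                sb.AppendLine("END:VEVENT");
            }

            sb.AppendLine("END:VCALENDAR");
            return sb.ToString();
        }
    }
    ```

    Serving the resulting string over HTTP with a text/calendar content type is enough for Outlook (or Google Calendar) to subscribe to it as a read-only calendar.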

    Read the article

  • Joining on NULLs

    - by Dave Ballantyne
    A problem I see on a fairly regular basis is that of dealing with NULL values. Specifically here, where we are joining two tables on two columns, one of which is ‘optional’, i.e. is nullable. So, something like this: a lookup where all the columns are equal, even when NULL.

    NULLs are a tricky thing to initially wrap your mind around. Statements like “NULL is not equal to NULL and neither is it not not equal to NULL, it’s NULL” can cause a serious brain freeze and leave you a gibbering wreck and needing your mummy.

    Before we plod on, time to set up some data to demo against.

    Create table #SourceTable
    (
        Id integer not null,
        SubId integer null,
        AnotherCol char(255) not null
    )
    go
    create unique clustered index idxSourceTable on #SourceTable(id,subID)
    go
    with cteNums as
    (
        select top(1000) number
        from master..spt_values
        where type ='P'
    )
    insert into #SourceTable
    select Num1.number,nullif(Num2.number,0),'SomeJunk'
    from cteNums num1
    cross join cteNums num2
    go
    Create table #LookupTable
    (
        Id integer not null,
        SubID integer null
    )
    go
    insert into #LookupTable
    Select top(100) id,subid from #SourceTable where subid is not null order by newid()
    go
    insert into #LookupTable
    Select top(3) id,subid from #SourceTable where subid is null order by newid()

    If that has run correctly, you will have 1 million rows in #SourceTable and 103 rows in #LookupTable. We now want to join one to the other.

    First attempt – let’s just join

    select *
    from #SourceTable
    join #LookupTable on #LookupTable.id = #SourceTable.id
                     and #LookupTable.SubID = #SourceTable.SubID

    OK, that’s a fail. We had 100 rows back; we didn’t correctly account for the 3 rows that have null values. Remember NULL <> NULL, and the join clause specifies SUBID = SUBID, which for those rows is not true.

    Second attempt – let’s deal with those pesky NULLs

    select *
    from #SourceTable
    join #LookupTable on #LookupTable.id = #SourceTable.id
                     and isnull(#LookupTable.SubID,0) = isnull(#SourceTable.SubID,0)

    OK, that’s the right result, well done, and 99.9% of the time that is where it’s left. It is a relatively trivial CPU overhead to wrap ISNULL around both columns and compare that result, so no problems. But, although that’s true, this is a relational database we are using here, not a procedural language. SQL is a declarative language; we are making a request to the engine to get the results we want. How we ask for them can make a ton of difference.

    Let’s look at the plan for our second attempt, specifically the clustered index seek on #SourceTable.

    There are two predicates: the ‘seek predicate’ and the ‘predicate’. The ‘seek predicate’ describes how SQL Server has been able to use an index. Here, it has been able to navigate the index to resolve where ID = ID. So far so good, but what about the ‘predicate’ (aka residual probe)? This is a row-by-row operation. For each row found in the index matching the seek predicate, the leaf level nodes have been scanned and tested using this logical condition. In this example [Expr1007] is the result of the IsNull operation on #LookupTable, and that is tested for equality with the IsNull operation on #SourceTable. This residual probe is quite a high overhead; it would be better if we could express our statement slightly differently to take full advantage of the index and make the test part of the ‘seek predicate’.

    Third attempt – X is null and Y is null

    So, let’s state the query in a slightly different manner:

    select *
    from #SourceTable
    join #LookupTable on #LookupTable.id = #SourceTable.id
                     and ( #LookupTable.SubID = #SourceTable.SubID
                           or (#LookupTable.SubID is null and #SourceTable.SubId is null)
                         )

    So it’s slightly wordier and may not be as clear in its intent to the human reader (that is what comments are for), but the key point is that it is now clearer to the query optimizer what our intention is. Let’s look at the plan for that query, again specifically the index seek operation on #SourceTable. No ‘predicate’, just a ‘seek predicate’ against the index to resolve both ID and SubID. A subtle difference that can be easily overlooked. But has it made a difference to the performance? Well, yes, a perhaps surprisingly large one. Clever query optimizer, well done.

    If you are using a scalar function on a column, you are pretty much guaranteeing that a residual probe will be used. By re-wording the query you may well be able to avoid this and use the index completely to resolve lookups. In terms of performance and scalability your system will be in a much better position if you can.

    Read the article

  • Increasing efficiency of N-Body gravity simulation

    - by Postman
    I'm making a space exploration type game; it will have many planets and other objects that will all have realistic gravity. I currently have a system in place that works, but if the number of planets goes above 70, the FPS decreases at a practically exponential rate. I'm making it in C# and XNA. My guess is that I should be able to do gravity calculations between 100 objects without this kind of strain, so clearly my method is not as efficient as it should be. I have two files, Gravity.cs and EntityEngine.cs. Gravity manages JUST the gravity calculations; EntityEngine creates an instance of Gravity and runs it, along with other entity related methods.

    EntityEngine.cs

    public void Update()
    {
        foreach (KeyValuePair<string, Entity> e in Entities)
        {
            e.Value.Update();
        }
        gravity.Update();
    }

    (Only relevant piece of code from EntityEngine, self explanatory. When an instance of Gravity is made in entityEngine, it passes itself (this) into it, so that gravity can have access to entityEngine.Entities (a dictionary of all planet objects).)

    Gravity.cs

    namespace ExplorationEngine
    {
        public class Gravity
        {
            private EntityEngine entityEngine;
            private Vector2 Force;
            private Vector2 VecForce;
            private float distance;
            private float mult;

            public Gravity(EntityEngine e)
            {
                entityEngine = e;
            }

            public void Update()
            {
                // First loop
                foreach (KeyValuePair<string, Entity> e in entityEngine.Entities)
                {
                    // Reset the force vector
                    Force = new Vector2();

                    // Second loop
                    foreach (KeyValuePair<string, Entity> e2 in entityEngine.Entities)
                    {
                        // Make sure the second value is not the current value from the first loop
                        if (e2.Value != e.Value)
                        {
                            // Find the distance between the two objects. Because Fg = G * ((M1 * M2) / r^2), using Vector2.Distance()
                            // and then squaring it is pointless and inefficient because Distance uses a sqrt; squaring the result
                            // simply cancels that sqrt.
                            distance = Vector2.DistanceSquared(e2.Value.Position, e.Value.Position);

                            // This makes sure that two planets do not attract each other if they are touching - completely unnecessary
                            // when I add collision. For now it just makes it so that the planets are not glitchy; performance is not
                            // significantly improved by removing this IF.
                            if (Math.Sqrt(distance) > (e.Value.Texture.Width / 2 + e2.Value.Texture.Width / 2))
                            {
                                // Calculate the magnitude of Fg (I'm using my own gravitational constant (G) for the sake of time;
                                // I know it's 1 at the moment, but I've been changing it).
                                mult = 1.0f * ((e.Value.Mass * e2.Value.Mass) / distance);

                                // Calculate the direction of the force. Simply subtracting the positions and normalizing works; this
                                // fixes diagonal vectors from having a larger value, and basically makes VecForce a direction.
                                VecForce = e2.Value.Position - e.Value.Position;
                                VecForce.Normalize();

                                // Add the vector for each planet in the second loop to a force var.
                                // I have tried Force += VecForce * mult, and have not noticed much of an increase in speed.
                                Force = Vector2.Add(Force, VecForce * mult);
                            }
                        }
                    }

                    // Add that force to the first loop's planet's position (later on I'll instead add to acceleration, to account for inertia)
                    e.Value.Position += Force;
                }
            }
        }
    }

    I have used various tips (about gravity optimizing, not threading) from THIS question (that I made yesterday). I've made this gravity method (Gravity.Update) as efficient as I know how to make it. This O(N^2) algorithm still seems to be eating up all of my CPU power though. Here is a LINK (Google Drive; go to File, Download, keep the .exe with the content folder; you will need the XNA Framework 4.0 Redist if you don't already have it) to the current version of my game. Left click makes a planet, right click removes the last planet. Mouse moves the camera, scroll wheel zooms in and out. Watch the FPS and planet count to see what I mean about performance issues past 70 planets. (ALL 70 planets must be moving; I've had 100 stationary planets and only 5 or so moving ones while still having 300 fps. The issue arises when 70+ are moving around.)

    After 70 planets are made, performance tanks exponentially. With < 70 planets, I get 330 fps (I have it capped at 300). At 90 planets, the FPS is about 2; more than that and it sticks around at 0 FPS. Strangely enough, when all planets are stationary, the FPS climbs back up to around 300, but as soon as something moves, it goes right back down to what it was. I have no systems in place to make this happen; it just does. I considered multithreading, but that previous question I asked taught me a thing or two, and I see now that that's not a viable option. I've also thought maybe I could do the calculations on my GPU instead, though I don't think it should be necessary. I also do not know how to do this; it is not a simple concept and I want to avoid it unless someone knows a really noob-friendly, simple way to do it that will work for an n-body gravity calculation. (I have an NVidia GTX 660.) Lastly, I've considered using a quadtree-type system (Barnes-Hut simulation). I've been told (in the previous question) that this is a good method that is commonly used, and it seems logical and straightforward; however, the implementation is way over my head and I haven't found a good tutorial for C# yet that explains it in a way I can understand, or uses code I can eventually figure out.

    So my question is this: How can I make my gravity method more efficient, allowing me to use more than 100 objects (I can render 1000 planets with constant 300+ FPS without gravity calculations), and if I can't do much to improve performance (including some kind of quadtree system), could I use my GPU to do the calculations?
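    One hedged observation on the Update loop above, short of a full Barnes-Hut implementation: gravity is symmetric, so each pair of planets only needs to be computed once per frame and applied with opposite signs to both bodies, and the touching check can compare squared distances so Math.Sqrt drops out of the hot path entirely. A minimal sketch of that idea (Body stands in for the question's Entity; only a position, mass and radius are assumed):

    ```csharp
    using System.Collections.Generic;
    using Microsoft.Xna.Framework;

    // Sketch only: loop over unordered pairs (i < j), compute each interaction once,
    // and apply equal and opposite forces, roughly halving the work of the nested
    // foreach in the question.
    public class Body
    {
        public Vector2 Position;
        public float Mass;
        public float Radius;    // stand-in for Texture.Width / 2
        public Vector2 Force;   // accumulated each frame
    }

    public static class PairwiseGravity
    {
        const float G = 1.0f;

        public static void Accumulate(List<Body> bodies)
        {
            foreach (var body in bodies) body.Force = Vector2.Zero;

            for (int i = 0; i < bodies.Count; i++)
            {
                for (int j = i + 1; j < bodies.Count; j++)
                {
                    Body a = bodies[i], b = bodies[j];
                    Vector2 delta = b.Position - a.Position;
                    float distSq = delta.LengthSquared();

                    // Compare squared distances so no square root is needed here.
                    float minDist = a.Radius + b.Radius;
                    if (distSq <= minDist * minDist) continue;

                    float magnitude = G * (a.Mass * b.Mass) / distSq;
                    Vector2 direction = Vector2.Normalize(delta);

                    a.Force += direction * magnitude;   // pull a toward b
                    b.Force -= direction * magnitude;   // and b toward a (Newton's third law)
                }
            }
        }
    }
    ```

    This stays O(N^2) but roughly halves the per-frame work; a Barnes-Hut quadtree would later replace the inner loop with a walk over tree nodes that approximate distant clusters of planets.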

    Read the article

  • How can I estimate the entropy of a password?

    - by Wug
    Having read various resources about password strength, I'm trying to create an algorithm that will provide a rough estimation of how much entropy a password has. I'm trying to create an algorithm that's as comprehensive as possible. At this point I only have pseudocode, but the algorithm covers the following:

    password length
    repeated characters
    patterns (logical)
    different character spaces (LC, UC, Numeric, Special, Extended)
    dictionary attacks

    It does NOT cover the following, and SHOULD cover it WELL (though not perfectly):

    ordering (passwords can be strictly ordered by output of this algorithm)
    patterns (spatial)

    Can anyone provide some insight on what this algorithm might be weak to? Specifically, can anyone think of situations where feeding a password to the algorithm would OVERESTIMATE its strength? Underestimations are less of an issue.

    The algorithm:

    // the password to test
    password = ?
    length = length(password)

    // unique character counts from password (duplicates discarded)
    uqlca = number of unique lowercase alphabetic characters in password
    uquca = number of unique uppercase alphabetic characters
    uqd   = number of unique digits
    uqsp  = number of unique special characters (anything with a key on the keyboard)
    uqxc  = number of unique special special characters (alt codes, extended-ascii stuff)

    // algorithm parameters, total sizes of alphabet spaces
    Nlca = total possible number of lowercase letters (26)
    Nuca = total uppercase letters (26)
    Nd   = total digits (10)
    Nsp  = total special characters (32 or something)
    Nxc  = total extended ascii characters that don't fit into other categories (idk, 50?)

    // algorithm parameters, pw strength growth rates as percentages (per character)
    flca = entropy growth factor for lowercase letters (.25 is probably a good value)
    fuca = EGF for uppercase letters (.4 is probably good)
    fd   = EGF for digits (.4 is probably good)
    fsp  = EGF for special chars (.5 is probably good)
    fxc  = EGF for extended ascii chars (.75 is probably good)

    // repetition factors. few unique letters == low factor, many unique == high
    rflca = (1 - (1 - flca) ^ uqlca)
    rfuca = (1 - (1 - fuca) ^ uquca)
    rfd   = (1 - (1 - fd  ) ^ uqd  )
    rfsp  = (1 - (1 - fsp ) ^ uqsp )
    rfxc  = (1 - (1 - fxc ) ^ uqxc )

    // digit strengths
    strength = ( rflca * Nlca + rfuca * Nuca + rfd * Nd + rfsp * Nsp + rfxc * Nxc ) ^ length
    entropybits = log_base_2(strength)

    A few inputs and their desired and actual entropy_bits outputs:

    INPUT            DESIRED         ACTUAL
    aaa              very pathetic   8.1
    aaaaaaaaa        pathetic        24.7
    abcdefghi        weak            31.2
    H0ley$Mol3y_     strong          72.2
    s^fU¬5ü;y34G<    wtf             88.9
    [a^36]*          pathetic        97.2
    [a^20]A[a^15]*   strong          146.8
    xkcd1**          medium          79.3
    xkcd2**          wtf             160.5

    * these 2 passwords use shortened notation, where [a^N] expands to N a's.
    ** xkcd1 = "Tr0ub4dor&3", xkcd2 = "correct horse battery staple"

    The algorithm does realize (correctly) that increasing the alphabet size (even by one digit) vastly strengthens long passwords, as shown by the difference in entropy_bits for the 6th and 7th passwords, which both consist of 36 a's, but the second's 21st a is capitalized. However, it does not account for the fact that having a password of 36 a's is not a good idea; it's easily broken with a weak password cracker (and anyone who watches you type it will see it), and the algorithm doesn't reflect that. It does, however, reflect the fact that xkcd1 is a weak password compared to xkcd2, despite having greater complexity density (is this even a thing?). How can I improve this algorithm?

    Addendum 1

    Dictionary attacks and pattern based attacks seem to be the big thing, so I'll take a stab at addressing those. I could perform a comprehensive search through the password for words from a word list and replace words with tokens unique to the words they represent. Word-tokens would then be treated as characters and have their own weight system, and would add their own weights to the password. I'd need a few new algorithm parameters (I'll call them lw, Nw ~= 2^11, fw ~= .5, and rfw) and I'd factor the weight into the password as I would any of the other weights. This word search could be specially modified to match both lowercase and uppercase letters as well as common character substitutions, like that of E with 3. If I didn't add extra weight to such matched words, the algorithm would underestimate their strength by a bit or two per word, which is OK. Otherwise, a general rule would be: for each non-perfect character match, give the word a bonus bit.

    I could then perform simple pattern checks, such as searches for runs of repeated characters and derivative tests (take the difference between each character), which would identify patterns such as 'aaaaa' and '12345', and replace each detected pattern with a pattern token, unique to the pattern and length. The algorithmic parameters (specifically, entropy per pattern) could be generated on the fly based on the pattern.

    At this point, I'd take the length of the password. Each word token and pattern token would count as one character; each token would replace the characters it symbolically represented. I made up some sort of pattern notation, but it includes the pattern length l, the pattern order o, and the base element b. This information could be used to compute some arbitrary weight for each pattern. I'd do something better in actual code.

    Modified Example:

    Password: 1234kitty$$$$$herpderp
    Tokenized: 1 2 3 4 k i t t y $ $ $ $ $ h e r p d e r p
    Words Filtered: 1 2 3 4 @W5783 $ $ $ $ $ @W9001 @W9002
    Patterns Filtered: @P[l=4,o=1,b='1'] @W5783 @P[l=5,o=0,b='$'] @W9001 @W9002
    Breakdown: 3 small, unique words and 2 patterns
    Entropy: about 45 bits, as per modified algorithm

    Password: correcthorsebatterystaple
    Tokenized: c o r r e c t h o r s e b a t t e r y s t a p l e
    Words Filtered: @W6783 @W7923 @W1535 @W2285
    Breakdown: 4 small, unique words and no patterns
    Entropy: 43 bits, as per modified algorithm

    The exact semantics of how entropy is calculated from patterns is up for discussion. I was thinking something like:

    entropy(b) * l * (o + 1) // o will be either zero or one

    The modified algorithm would find flaws with and reduce the strength of each password in the original table, with the exception of s^fU¬5ü;y34G<, which contains no words or patterns.
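    For reference, here is a direct C# translation of the pseudocode above, using the same alphabet sizes and growth factors; the character-class tests are approximations of the categories the question describes, and this only reproduces the question's heuristic rather than a vetted entropy measure:

    ```csharp
    using System;
    using System.Linq;

    public static class PasswordEntropy
    {
        // Straight translation of the question's formula:
        // entropybits = log2( (sum of rf_i * N_i) ^ length ) = length * log2(sum)
        public static double EstimateBits(string password)
        {
            int length = password.Length;

            int uqLower    = password.Where(char.IsLower).Distinct().Count();
            int uqUpper    = password.Where(char.IsUpper).Distinct().Count();
            int uqDigit    = password.Where(char.IsDigit).Distinct().Count();
            // Approximation of "anything with a key on the keyboard" vs. extended characters.
            int uqSpecial  = password.Where(c => !char.IsLetterOrDigit(c) && c < 128).Distinct().Count();
            int uqExtended = password.Where(c => c >= 128).Distinct().Count();

            // Alphabet sizes (N*) and entropy growth factors (f*) from the question.
            double strength =
                  Rf(uqLower,    0.25) * 26
                + Rf(uqUpper,    0.40) * 26
                + Rf(uqDigit,    0.40) * 10
                + Rf(uqSpecial,  0.50) * 32
                + Rf(uqExtended, 0.75) * 50;

            return length * Math.Log(strength, 2);
        }

        // rf = 1 - (1 - f)^uq : few unique characters => low factor, many => high.
        private static double Rf(int uniqueCount, double growthFactor) =>
            1 - Math.Pow(1 - growthFactor, uniqueCount);
    }
    ```

    With these parameters, EstimateBits("H0ley$Mol3y_") works out to roughly 72 bits, in line with the table above.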

    Read the article

  • Using XA Transactions in Coherence-based Applications

    - by jpurdy
    While the costs of XA transactions are well known (e.g. increased data contention, higher latency, significant disk I/O for logging, availability challenges, etc.), in many cases they are the most attractive option for coordinating logical transactions across multiple resources. There are a few common approaches when integrating Coherence into applications via the use of an application server's transaction manager:

    Use of Coherence as a read-only cache, applying transactions to the underlying database (or any system of record) instead of the cache.
    Use of the TransactionMap interface via the included resource adapter.
    Use of the new ACID transaction framework, introduced in Coherence 3.6.

    Each of these may have significant drawbacks for certain workloads.

    Using Coherence as a read-only cache is the simplest option. In this approach, the application is responsible for managing both the database and the cache (either within the business logic or via application server hooks). This approach also tends to provide limited benefit for many workloads, particularly those workloads that either have queries (given the complexity of maintaining a fully cached data set in Coherence) or are not read-heavy (where the cost of managing the cache may outweigh the benefits of reading from it). All updates are made synchronously to the database, leaving it as both a source of latency as well as a potential bottleneck. This approach also prevents addressing "hot data" problems (when certain objects are updated by many concurrent transactions) since most database servers offer no facilities for explicitly controlling concurrent updates. Finally, this option tends to be a better fit for key-based access (rather than filter-based access such as queries) since this makes it easier to aggressively invalidate cache entries without worrying about when they will be reloaded. The advantage of this approach is that it allows strong data consistency as long as optimistic concurrency control is used to ensure that database updates are applied correctly regardless of whether the cache contains stale (or even dirty) data. Another benefit of this approach is that it avoids the limitations of Coherence's write-through caching implementation.

    TransactionMap is generally used when Coherence acts as the system of record. TransactionMap is not generally compatible with write-through caching, so it will usually be used either to manage a standalone cache or when the cache is backed by a database via write-behind caching. TransactionMap has some restrictions that may limit its utility, the most significant being:

    The lock-based concurrency model is relatively inefficient and may introduce significant latency and contention. As an example, in a typical configuration, a transaction that updates 20 cache entries will require roughly 40ms just for lock management (assuming all locks are granted immediately, and excluding validation and writing, which will require a similar amount of time). This may be partially mitigated by denormalizing (e.g. combining a parent object and its set of child objects into a single cache entry), at the cost of increasing false contention (e.g. transactions will conflict even when updating different child objects).

    If the client (application server JVM) fails during the commit phase, locks will be released immediately, and the transaction may be partially committed. In practice, this is usually not as bad as it may sound since the commit phase is usually very short (all locks having been previously acquired). Note that this vulnerability does not exist when a single NamedCache is used and all updates are confined to a single partition (generally implying the use of partition affinity).

    The unconventional TransactionMap API is cumbersome but manageable. Only a few methods are transactional, primarily get(), put() and remove().

    The ACID transactions framework (accessed via the Connection class) provides atomicity guarantees by implementing the NamedCache interface, maintaining its own cache data and transaction logs inside a set of private partitioned caches. This feature may be used as either a local transactional resource or as a logging XA resource. However, a lack of database integration precludes the use of this functionality for most applications. A side effect of this is that this feature has not seen significant adoption, meaning that any use of this is subject to the usual headaches associated with being an early adopter (greater chance of bugs and greater risk of hitting an unoptimized code path). As a result, for the moment, we generally recommend against using this feature.

    In summary, it is possible to use Coherence in XA-oriented applications, and several customers are doing this successfully, but it is not a core usage model for the product, so care should be taken before committing to this path. For most applications, the most robust solution is normally to use Coherence as a read-only cache of the underlying data resources, even if this prevents taking advantage of certain product features.

    Read the article

  • Can't control connection bit rate using iwconfig with Atheros TL-WN821N (AR7010)

    - by Paul H
    I'm trying to reduce the connection bit rate on my Atheros TP-Link TL-WN821N v3 usb wifi adapter due to frequent instability issues (reported connection speed goes down to 1Mb/s and I have to physically reconnect the adapter to regain a connection). I know this is a common problem with this device, and I have tried everything I can think of to fix it, including using drivers from linux-backports; compiling and installing a custom firmware (following instructions on https://wiki.debian.org/ath9k_htc#fw-free) and (as a last resort) using ndiswrapper. When using ndiswrapper, the wifi adapter is stable and operates in g mode at 54Mb/s (whilst when using the default ath9k_htc module, the adapter connects in n mode and the bit rate fluctuates constantly). Unfortunately, with this setup I have to run my processor using only one core, since using SMP with ndiswrapper causes a kernel oops on my system. So I want to lock my bit rate to 54Mb/s (or less, if need be) for connection stability, using the ath9k_htc module. I've tried 'sudo iwconfig wlan0 rate 54M'; the command runs with no error but when I check the bit rate with 'sudo iwlist wlan0 bitrate' the command returns: wlan0 unknown bit-rate information. Current Bit Rate:78 Mb/s Any ideas? Here's some info (hopefully relevant) on my setup: Xubuntu (12.04.3) 64bit (kernel 3.2.0-55.85-generic) using Network Manager. My Router is from Virgin Media, the VMDG480. lshw -C network : *-network description: Wireless interface physical id: 1 bus info: usb@1:4 logical name: wlan0 serial: 74:ea:3a:8f:16:b6 capabilities: ethernet physical wireless configuration: broadcast=yes driver=ath9k_htc driverversion=3.2.0-55 firmware=1.3 ip=192.168.0.9 link=yes multicast=yes wireless=IEEE 802.11bgn lsusb -v: Bus 001 Device 003: ID 0cf3:7015 Atheros Communications, Inc. TP-Link TL-WN821N v3 802.11n [Atheros AR7010+AR9287] Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 255 Vendor Specific Class bDeviceSubClass 255 Vendor Specific Subclass bDeviceProtocol 255 Vendor Specific Protocol bMaxPacketSize0 64 idVendor 0x0cf3 Atheros Communications, Inc. 
idProduct 0x7015 TP-Link TL-WN821N v3 802.11n [Atheros AR7010+AR9287] bcdDevice 2.02 iManufacturer 16 ATHEROS iProduct 32 UB95 iSerial 48 12345 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 60 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0x80 (Bus Powered) MaxPower 500mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 6 bInterfaceClass 255 Vendor Specific Class bInterfaceSubClass 0 bInterfaceProtocol 0 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x01 EP 1 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x82 EP 2 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x83 EP 3 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0040 1x 64 bytes bInterval 1 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x04 EP 4 OUT bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0040 1x 64 bytes bInterval 1 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x05 EP 5 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x06 EP 6 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Device Qualifier (for other device speed): bLength 10 bDescriptorType 6 bcdUSB 2.00 bDeviceClass 255 Vendor Specific Class bDeviceSubClass 255 Vendor Specific Subclass bDeviceProtocol 255 Vendor Specific Protocol bMaxPacketSize0 64 bNumConfigurations 1 Device Status: 0x0000 (Bus Powered) iwlist wlan0 scanning: wlan0 Scan completed : Cell 01 - Address: C4:3D:C7:3A:1F:5D Channel:1 Frequency:2.412 GHz (Channel 1) Quality=37/70 Signal level=-73 dBm Encryption key:on ESSID:"my essid" Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 18 Mb/s 24 Mb/s; 36 Mb/s; 54 Mb/s Bit Rates:6 Mb/s; 9 Mb/s; 12 Mb/s; 48 Mb/s Mode:Master Extra:tsf=00000070cca77186 Extra: Last beacon: 5588ms ago IE: Unknown: 0007756E69636F726E IE: Unknown: 010882848B962430486C IE: Unknown: 030101 IE: Unknown: 2A0100 IE: Unknown: 2F0100 IE: IEEE 802.11i/WPA2 Version 1 Group Cipher : TKIP Pairwise Ciphers (2) : CCMP TKIP Authentication Suites (1) : PSK IE: Unknown: 32040C121860 IE: Unknown: 2D1AFC181BFFFF000000000000000000000000000000000000000000 IE: Unknown: 3D1601080400000000000000000000000000000000000000 IE: Unknown: DD7E0050F204104A0001101044000102103B00010310470010F99C335D7BAC57FB00137DFA79600220102100074E657467656172102300074E6574676561721024000631323334353610420007303030303030311054000800060050F20400011011000743473331303144100800022008103C0001011049000600372A000120 IE: Unknown: DD090010180203F02C0000 IE: WPA Version 1 Group Cipher : TKIP Pairwise Ciphers (2) : CCMP TKIP Authentication Suites (1) : PSK IE: Unknown: DD180050F2020101800003A4000027A4000042435E0062322F00 iwconfig: lo no wireless extensions. 
wlan0 IEEE 802.11bgn ESSID:"my essid" Mode:Managed Frequency:2.412 GHz Access Point: C4:3D:C7:3A:1F:5D Bit Rate=78 Mb/s Tx-Power=20 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off Link Quality=36/70 Signal level=-74 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:0 Invalid misc:0 Missed beacon:0

    Read the article

  • Toolbar items in sub-nib

    - by roe
    This question has probably been asked before, but my google-fu must be inferior to everybody else's, because I can't figure this out. I'm playing around with the iPhone SDK, and I'm building a concept app I've been thinking about. If we have a look at the skeleton generated for a navigation-based app, the MainWindow.xib contains a navigation controller, and within that a root view controller (and a navigation bar and toolbar if you play around with it a little). The root view controller has the RootViewController nib associated with it, which loads the table view. So far so good. To add content to the toolbar and to the navigation bar, I'm supposed to add those in the hierarchy below the Root View Controller (which works, no problem). However, what I can't figure out is this: that is all still within the MainWindow.xib (or, at runtime, nib). How would I define a xib so that the toolbar items are picked up from it? I want to do (the equivalent of, just reusing the name here) RootViewController *controller = [[RootViewController alloc] initWithNibName:nil bundle:nil]; [self.navigationController pushViewController:controller animated:YES]; [controller release]; and have the navigation controller pick up on the toolbar items defined in that nib. The logical place to put them would be in the hierarchy under File's Owner (which is of type RootViewController), but it doesn't appear to be possible. Currently, I'm assigning these (navigationItem and toolbarItems) manually in the viewDidLoad method, or defining them in the MainWindow.xib directly so they are loaded when the app initializes. Any ideas? Edit: I guess I'll try to explain with a picture. This is the Interface Builder view of the main window, pretty much as it comes out of the wizard for a navigation-based project, though I've added a toolbar item for clarity. You can see the navigation controller, with a toolbar and a navigation bar, and the root view controller. Basically, the Root View Controller has a bar button item and a navigation item, as you can see. The thing is, it also has a nib associated with it which, when loaded, will instantiate a view and assign it to the view outlet of the controller (which in that nib is File's Owner, of type RootViewController, as it should be). How can I get the toolbar item and the navigation item into the other nib, the RootViewController nib, so I can remove them here? The RootViewController nib adds everything else to the Root View Controller, so why not these items? The background for this is that I want to simply instantiate RootViewController, initialize it with its own nib (i.e. initWithNibName:nil as shown above), and push it onto the navigation controller, without having to add the navigation/toolbar items in code (as I do now).

    Read the article

  • Rails - embedded polymorphic comment list + add comment form - example?

    - by odigity
    Hey, all. Working on my first Rails app. I've searched all around - read a bunch of tutorials, articles, and forum posts, watched some screencasts, and I've found a few examples that come close to what I'm trying to do (notably http://railscasts.com/episodes/154-polymorphic-association and ep 196 about nested model forms), but not exactly. I have two models (Podcast and BlogPost) that need to be commentable, and I have a Comment model that is polymorphically related to both. The railscast above (ep 154) had a very similar example, but Ryan used a full set of nested routes, so there were specific templates for adding and editing comments. What I want to do is show the list of comments right on the Podcast or BlogPost page, along with an Add Comment form at the bottom. I don't need a separate add template/route, and I don't need the ability to edit, only delete. This is a pretty common design on the web, but I can't find a Rails example specifically about this pattern. Here's my current understanding: I need routes for the create and delete actions, of course, but there are no templates associated with those. I'm also guessing that the right approach is to create a partial that can be included at the bottom of both the Podcast and BlogPost show templates. The logical name for the partial seems to me to be something like _comments.html.haml. I know it's a common convention to have the object passed to the partial be named after the template, but calling the object 'comments' doesn't seem to match my use case, since what I really need to pass is the commentable object (Podcast or BlogPost). So, I guess I'd use the locals option for the render partial call? (:commentable => @podcast). Inside the partial, I could call commentable.comments to get the comments collection, render that with a second partial (this time with the conventional use case, calling the partial _comment.html.haml), then create a form that submits to... what? REST-wise, it should be a POST to the collection, which would be /podcast|blogpost/:id/comments, and I think the helper for that is podcast_comments_path(podcast) if it were a podcast - not sure what to do though, since I'm using polymorphic comments. That would trigger the comments controller's create action, which would then need to redirect back to the podcast|blogpost path /podcast|blogpost/:id. It's all a bit overwhelming, which is why I was really hoping to find a screencast or example that specifically implements this design.

    Read the article

  • Segmentation fault when creating a Phonon MediaObject

    - by Luke Hansford
    I have music playing program made using PySide which uses Phonon to playback audio. I updated to MacOS X Mavericks a few days ago, which meant I needed to reinstall PySide. I'm not sure which of these actions has caused this, but now whenever I try to create a Phonon MediaObject I get a Segmentation Fault: 11 from Python. It's not just in my program, it happens when trying to create a MediaObject in Python without any other actions. I'm getting the following error message from my Mac whenever it crashes: Process: Python [13711] Path: /usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python Identifier: org.python.python Version: 2.7.5 (2.7.5) Code Type: X86-64 (Native) Parent Process: bash [13707] Responsible: Terminal [13704] User ID: 501 Date/Time: 2013-11-01 19:47:53.164 +1000 OS Version: Mac OS X 10.9 (13A603) Report Version: 11 Anonymous UUID: C2686854-18CA-9D37-26E9-60050E3C4DA6 Sleep/Wake UUID: BB983BF6-CCE2-44D1-82A0-1C73382DFFE4 Crashed Thread: 0 Dispatch queue: com.apple.main-thread Exception Type: EXC_BAD_ACCESS (SIGSEGV) Exception Codes: KERN_INVALID_ADDRESS at 0x0000000000000008 VM Regions Near 0x8: --> __TEXT 00000001082e8000-00000001082e9000 [ 4K] r-x/rwx SM=COW /usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python Thread 0 Crashed:: Dispatch queue: com.apple.main-thread 0 QtCore 0x000000010a1b34cb QObject::moveToThread(QThread*) + 17 1 QtDBus 0x000000010d55f98b QDBusDefaultConnection::QDBusDefaultConnection(QDBusConnection::BusType, char const*) + 171 2 QtDBus 0x000000010d55ebdf QDBusConnection::sessionBus() + 71 3 phonon 0x000000010d50228d Phonon::FactoryPrivate::FactoryPrivate() + 189 4 phonon 0x000000010d5024d5 Phonon::$_249::operator->() + 99 5 phonon 0x000000010d502991 Phonon::Factory::registerFrontendObject(Phonon::MediaNodePrivate*) + 17 6 phonon 0x000000010d50b27e Phonon::MediaNodePrivate::MediaNodePrivate(Phonon::MediaNodePrivate::CastId) + 80 7 phonon 0x000000010d50f570 Phonon::MediaObjectPrivate::MediaObjectPrivate() + 24 8 phonon 0x000000010d50be9d Phonon::MediaObject::MediaObject(QObject*) + 45 9 phonon.so 0x000000010d42f24a Sbk_Phonon_MediaObject_Init + 458 10 org.python.python 0x0000000108338707 type_call + 189 11 org.python.python 0x00000001082f74fd PyObject_Call + 101 12 org.python.python 0x00000001083714f0 PyEval_EvalFrameEx + 15525 13 org.python.python 0x0000000108373aaf fast_function + 182 14 org.python.python 0x0000000108370919 PyEval_EvalFrameEx + 12494 15 org.python.python 0x000000010836d721 PyEval_EvalCodeEx + 1638 16 org.python.python 0x000000010836d0b5 PyEval_EvalCode + 54 17 org.python.python 0x000000010838beb8 run_mod + 53 18 org.python.python 0x000000010838bf5f PyRun_FileExFlags + 137 19 org.python.python 0x000000010838baad PyRun_SimpleFileExFlags + 718 20 org.python.python 0x000000010839c58b Py_Main + 3039 21 libdyld.dylib 0x00007fff8e4fb5fd start + 1 Thread 1:: Dispatch queue: com.apple.libdispatch-manager 0 libsystem_kernel.dylib 0x00007fff8c938662 kevent64 + 10 1 libdispatch.dylib 0x00007fff923e743d _dispatch_mgr_invoke + 239 2 libdispatch.dylib 0x00007fff923e7152 _dispatch_mgr_thread + 52 Thread 2: 0 libsystem_kernel.dylib 0x00007fff8c937e6a __workq_kernreturn + 10 1 libsystem_pthread.dylib 0x00007fff90bd8f08 _pthread_wqthread + 330 2 libsystem_pthread.dylib 0x00007fff90bdbfb9 start_wqthread + 13 Thread 3: 0 libsystem_kernel.dylib 0x00007fff8c937e6a __workq_kernreturn + 10 1 libsystem_pthread.dylib 0x00007fff90bd8f08 
_pthread_wqthread + 330 2 libsystem_pthread.dylib 0x00007fff90bdbfb9 start_wqthread + 13 Thread 4: 0 libsystem_kernel.dylib 0x00007fff8c937e6a __workq_kernreturn + 10 1 libsystem_pthread.dylib 0x00007fff90bd8f08 _pthread_wqthread + 330 2 libsystem_pthread.dylib 0x00007fff90bdbfb9 start_wqthread + 13 Thread 0 crashed with X86 Thread State (64-bit): rax: 0x00007feba0d19700 rbx: 0x000000010d5b7098 rcx: 0x00000000002f4180 rdx: 0x000000000012c040 rdi: 0x0000000000000000 rsi: 0x00007feba0d19700 rbp: 0x00007fff57917210 rsp: 0x00007fff579171d0 r8: 0x00007feba0fd5d10 r9: 0x00007feba0ff5310 r10: 0x0000000019c04cbe r11: 0x0000000070769b38 r12: 0x00007fff57917220 r13: 0x00007feba0c07190 r14: 0x0000000000000000 r15: 0x00007feba0fe1430 rip: 0x000000010a1b34cb rfl: 0x0000000000010202 cr2: 0x0000000000000008 Logical CPU: 0 Error Code: 0x00000004 Trap Number: 14 Anyone have any ideas about what is happening?

    Read the article

  • Using QT to build a WYSIWYG Editor for a Custom Markup Language

    - by Aaron
    I'm new to Qt and am trying to figure out the best means of creating a WYSIWYG editor widget for a custom markup language that displays simple text, images, and links. I need to be able to propagate changes from the WYSIWYG editor to the custom markup representation. As a concrete example of the problem domain, imagine that the custom markup might have a "player" tag which contains a player name and a team name. The markup could look like this: Last week, <player id="1234"><name>Aaron Rodgers</name><team>Packers</team></player> threw a pass. This text would display in the editor as: Last week, Aaron Rodgers of the Packers threw a pass. The player name and the team name would be editable directly within the editor in standard WYSIWYG fashion, so that my users do not have to learn any markup. Also, when the player name is moused over, a details pop-up will appear about that player, and similarly for the team. With that long introduction, I'm trying to figure out where to start with Qt. It seems that the most logical option would be the Rich Text API using a QTextDocument. This approach seems less than ideal given the limitations of a QTextDocument: I can't figure out how to capture navigation events from clicking on links, since following links on click seems to be enabled only when the QTextEdit is read-only; custom objects that implement QTextObjectInterface are ignored in copy-and-paste operations; and any HTML-based markup that is passed to it as rich text is retranslated into a series of span tags and lots of other junk, making it extremely difficult to propagate changes from the editor back to the original custom markup. A second option appears to be QWebKit, which allows for live editing of HTML5 markup, so I could specify a two-way translation between the custom markup and HTML5. I'm not clear on how one would propagate changes from the editor back to the original markup in real time without re-translating the entire document on every text change. The QWebKit solution also looks awfully bulky to me (learning WebKit along with Qt) for what should be a relatively simple problem. I have also considered implementing the WYSIWYG editor as a custom class, using native Qt containers, labels, images, and other widgets manually. This seems like the most flexible approach, and the one most likely not to run into unresolvable problems. However, I'm pretty sure that implementing all the details of a normal text editor (selecting text, font changes, cut-and-paste support, undo/redo, dragging of objects, cursor placement, etc.) will be incredibly time consuming. So, finally, my question: are there any Qt gurus out there with some advice on where to start with this sort of project? BTW, I am using Qt because the application is a desktop application that needs platform independence.
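    One of the QTextDocument limitations mentioned above, capturing link clicks while the widget stays editable, may be workable without going read-only: QTextEdit::anchorAt() reports the anchor under a given point, so a subclass can intercept the click itself. The sketch below only illustrates that idea, not a full editor; the MarkupEdit class name, the Ctrl-click convention and the player: URL scheme are assumptions, not part of the original question.

        #include <QApplication>
        #include <QTextEdit>
        #include <QMouseEvent>
        #include <QDebug>

        // Minimal sketch: keep the QTextEdit editable, but treat Ctrl-click on an anchor as navigation.
        class MarkupEdit : public QTextEdit {
        protected:
            void mousePressEvent(QMouseEvent *e)
            {
                const QString href = anchorAt(e->pos());        // empty if the click is not on a link
                if (!href.isEmpty() && (e->modifiers() & Qt::ControlModifier)) {
                    qDebug() << "navigate to" << href;          // hand off to your own link handler here
                    return;                                     // swallow the event; normal editing is untouched
                }
                QTextEdit::mousePressEvent(e);                  // otherwise let the editor handle it
            }
        };

        int main(int argc, char **argv)
        {
            QApplication app(argc, argv);
            MarkupEdit edit;
            edit.setHtml("Last week, <a href=\"player:1234\">Aaron Rodgers</a> of the "
                         "<a href=\"team:packers\">Packers</a> threw a pass.");
            edit.show();
            return app.exec();
        }

    The same anchorAt() test can drive the mouse-over pop-ups from mouseMoveEvent(), assuming mouse tracking is enabled on the viewport.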

    Read the article

  • When is it worth using a BindingSource?

    - by Justin
    I think I understand well enough what the BindingSource class does - i.e. provide a layer of indirection between a data source and a UI control. It implements the IBindingList interface and therefore also provides support for sorting. And I've used it frequently enough, without too many problems. But I'm wondering if I use it more often than I should. Perhaps an example would help. Let's say I have just a simple textbox on a form (using WinForms), and I'd like to bind that textbox to a simple property inside a class that returns a string. Is it worth using a BindingSource in this situation? Now let's say I have a grid on my form, and I'd like to bind it to a DataTable. Should I use a BindingSource now? In the latter case, I probably would not use a BindingSource, as a DataTable, from what I can gather, provides the same functionality that the BindingSource itself would. The DataTable will fire the right events when a row is added, deleted, etc., so that the grid will automatically update. But in the first case, with the textbox being bound to a string, I would probably have the class that contains the string property implement INotifyPropertyChanged, so that it could fire the PropertyChanged event when the string changes. I would use a BindingSource so that it could listen to these PropertyChanged events and update the textbox automatically when the string changes. How does this sound so far? I still feel like there's a gap in my understanding that's preventing me from seeing the whole picture. This has been a pretty vague question so far, so I'll try to ask some more specific questions - ideally the answers will reference the above examples or something similar... (1) Is it worth using a BindingSource in either of the above examples? (2) It seems that developers just "assume" that the DataTable class will do the right thing, in firing PropertyChanged events at the right time. How does one know if a data source is capable of doing this? Is there a particular interface that a data source should implement in order for developers to be able to assume this behaviour? (3) Does it matter what Control is being bound to, when considering whether or not to use a BindingSource? Or is it only the data source that should affect the decision? Perhaps the answer is (and this would seem logical enough): the Control needs to be intelligent enough to listen to the PropertyChanged events, otherwise a BindingSource is required. So how does one tell if the Control is capable of doing this? Again, is there a particular interface that developers can look for that the Control must implement? It is this confusion that has, in the past, led to me always using a BindingSource. But I'd like to understand better exactly when to use one, so that I do so only when necessary.

    Read the article

  • EntityManagerFactory error on Websphere

    - by neverland
    I have a very weird problem. I have an application using Hibernate and Spring, with an entity manager defined that uses a JNDI lookup. It looks something like this: <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <property name="dataSource" ref="dataSource" /> <property name="persistenceUnitName" value="ConfigAPPPersist" /> <property name="jpaVendorAdapter"> <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"> <property name="showSql" value="true" /> <property name="generateDdl" value="false" /> <property name="databasePlatform" value="org.hibernate.dialect.Oracle9Dialect" /> </bean> </property> </bean> <bean id="dataSource" class="org.springframework.jdbc.datasource.WebSphereDataSourceAdapter"> <property name="targetDataSource"> <bean class="org.springframework.jndi.JndiObjectFactoryBean"> <property name="jndiName" value="jdbc/pmp" /> </bean> </property> </bean> This application runs fine in DEV. But when we move to higher environments, the team that deploys this application does so successfully at first, yet after a few restarts of the application the entity manager starts giving this problem: Caused by: javax.persistence.PersistenceException: [PersistenceUnit: ConfigAPPPersist] Unable to build EntityManagerFactory at org.hibernate.ejb.Ejb3Configuration.buildEntityManagerFactory(Ejb3Configuration.java:677) at org.hibernate.ejb.HibernatePersistence.createContainerEntityManagerFactory(HibernatePersistence.java:132) at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.createNativeEntityManagerFactory(LocalContainerEntityManagerFactoryBean.java:224) at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.afterPropertiesSet(AbstractEntityManagerFactoryBean.java:291) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1368) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1334) ... 32 more Caused by: org.hibernate.MappingException: property mapping has wrong number of columns: com.***.***.jpa.marketing.entity.MarketBrands.$performasure_j2eeInfo type: object Now you would say it is pretty obvious that the entity MarketBrands is incorrect. But it's not: it maps to the table just fine, and the same code works in DEV. Also, the JNDI name cannot be incorrect, since the application deploys and works fine initially but throws up this error after a restart. This is weird and not very logical, but if someone has faced this or has any idea what might be causing it, please help. The persistence.xml for the persistence unit has very little in it: <?xml version="1.0" encoding="UTF-8"?> <persistence xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd" version="1.0"> <persistence-unit name="ConfigAPPPersist"> <!-- commented code --> </persistence-unit> </persistence>

    Read the article

  • NSDocument Subclass not closed by NSWindowController?

    - by Nathan Douglas
    Okay, I'm fairly new to Cocoa and Objective-C, and to OOP in general. As background, I'm working on an extensible editor that stores the user's documents in a package. This of course required some "fun" to get around some issues with NSFileWrapper (i.e. a somewhat sneaky writing and loading process to avoid making NSFileWrappers for every single document within the bundle). The solution I arrived at was to essentially treat my NSDocument subclass as just a shell -- use it to make the folder for the bundle, and then pass off writing the actual content of the document to other methods. Unfortunately, at some point I seem to have completely screwed the pooch. I don't know how this happened, but closing the document window no longer releases the document. The document object doesn't seem to receive a "close" message -- or any related messages -- even though the window closes successfully. The end result is that if I start my app, create a new document, save it, then close it, and try to reopen it, the document window never appears. With some creative subclassing and NSLogging, I managed to figure out that the document object was still in memory, and still attached to the NSDocumentController instance, and so trying to open the document never got past the NSDocumentController's "hmm, currently have that one open" check. I did have an NSWindowController and NSDocumentController instance, but I've purged them from my project completely. I've overridden nearly every method for NSDocument trying to find out where the issue is. So far as I know, my Interface Builder bindings are all correct -- "Close" in the main menu is attached to "performClose:" of the First Responder, etc, and I've tried with fresh unsullied MainMenu and Document xibs as well. I thought that it might be something strange with my bundle writing code, so I basically deleted it all and started from scratch, but that didn't seem to work. I took out my init method overrides, and that didn't help either. I don't have the source of any simple document apps here, so I didn't try the next logical step (to substitute known-working code for mine in the readfromurl and writetourl methods). I've had this problem for about sixteen hours of uninterrupted troubleshooting now, and needless to say, I'm at the end of my rope. If I can't figure it out, I guess I'm going to try the project from scratch with a lot more code and intensity based around the bundle-document mess. Any help would be greatly appreciated.

    Read the article

  • Segmentation fault in std function std::_Rb_tree_rebalance_for_erase ()

    - by Sarah
    I'm somewhat new to programming and am unsure how to deal with a segmentation fault that appears to be coming from a std function. I hope I'm doing something stupid (i.e., misusing a container), because I have no idea how to fix it. The precise error is Program received signal EXC_BAD_ACCESS, Could not access memory. Reason: KERN_INVALID_ADDRESS at address: 0x000000000000000c 0x00007fff8062b144 in std::_Rb_tree_rebalance_for_erase () (gdb) backtrace #0 0x00007fff8062b144 in std::_Rb_tree_rebalance_for_erase () #1 0x000000010000e593 in Simulation::runEpidSim (this=0x7fff5fbfcb20) at stl_tree.h:1263 #2 0x0000000100016078 in main () at main.cpp:43 The function that exits successfully just before the segmentation fault updates the contents of two containers. One is a boost::unordered_multimap called carriage; it contains one or more struct Infection objects that contain two doubles. The other container is of type std::multiset< Event, std::less< Event > > (typedef'd as EventPQ) called ce. It is full of Event structs.
        void Host::recover( int s, double recoverTime, EventPQ & ce ) {
            // Clearing all serotypes in carriage
            // and their associated recovery events in ce
            // and then updating susceptibility to each serotype
            double oldRecTime;
            int z;
            for ( InfectionMap::iterator itr = carriage.begin(); itr != carriage.end(); itr++ ) {
                z = itr->first;
                oldRecTime = (itr->second).recT;
                EventPQ::iterator epqItr = ce.find( Event(oldRecTime) );
                assert( epqItr != ce.end() );
                ce.erase( epqItr );
                immune[ z ]++;
            }
            carriage.clear();
            calcSusc(); // a function that edits an array
            cout << "Done with sync_recovery event." << endl;
        }
    The last cout << line appears immediately before the seg fault. I hope this is enough (but not too much) information. My idea so far is that the rebalancing is being attempted on ce after this function returns, but I am unsure why it would be failing. (It's unfortunately very hard for me to test this code by removing particular lines, since they would create logical inconsistencies and further problems, but if experienced programmers still think this is the way to go, I'll try.)
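    A crash inside _Rb_tree_rebalance_for_erase almost always means erase() was handed an invalid iterator or the tree was corrupted earlier. One thing that may be worth checking, purely as a guess from the code above: if two hosts can share the same recT, ce.find( Event(oldRecTime) ) is free to return a different host's event, the wrong element gets erased, and a later erase of the now-missing element can corrupt the tree. A minimal sketch of an identity-checked erase over equal_range follows; the hostId field is a hypothetical stand-in for whatever ties an Event back to its Host.

        #include <set>
        #include <cassert>

        struct Event {
            double time;
            int hostId;                        // hypothetical field used to tell equal-keyed events apart
            explicit Event(double t, int h = -1) : time(t), hostId(h) {}
            bool operator<(const Event &rhs) const { return time < rhs.time; }
        };

        typedef std::multiset<Event, std::less<Event> > EventPQ;

        int main()
        {
            EventPQ ce;
            ce.insert(Event(1.5, 7));
            ce.insert(Event(1.5, 9));          // same key: find(Event(1.5)) may return either element

            // Erase only the event belonging to host 9, not just the first one with that key.
            std::pair<EventPQ::iterator, EventPQ::iterator> range = ce.equal_range(Event(1.5));
            for (EventPQ::iterator it = range.first; it != range.second; ++it) {
                if (it->hostId == 9) {
                    ce.erase(it);              // erase() invalidates only 'it', so stop iterating right away
                    break;
                }
            }
            assert(ce.size() == 1 && ce.begin()->hostId == 7);
            return 0;
        }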

    Read the article

  • Oracle Schema Design: Seperate Schema with I/O Overhead?

    - by Guru
    We are designing the database schema for a new system based on Oracle 11gR1. We have identified a main schema which would have close to 100 tables; these will be accessed from the front-end Java application. We have a requirement to audit the values which get changed in close to 50 tables, and this has to be done for every row. This means it is possible that, for a single row in MYSYS.T1, there might be 50 (or more) rows in the MYSYS_AUDIT.T1_AUD table. We would be keeping the old values of every column entry as well as the new values available from T1. Our DBA advised against this method because, he said, a separate schema means an extra I/O for every operation. Basically, the AUDIT schema would be used only for some analysis and for entering values (thus SELECT and INSERT). Is it true that "a separate schema means an extra I/O"? I could not find a justification. It appears logical to me, as the AUDIT data should not be tampered with, so a separate schema makes sense. Also, we designed a separate schema for archiving some tables from MYSYS. From MYSYS_ARC the tables might be backed up to tape or deleted after sufficient time. A few stats: some tables (20 to 30) in the MYSYS schema could grow to around 50M rows. We have asked for a total disk space of 4 TB. The MYSYS_AUDIT schema might hold 10 times the data of MYSYS, but we won't keep it for more than 3 months. Questions: Given all this, can you suggest any improvements? Does a separate schema affect disk I/O (one extra I/O for every schema)? Any general suggestions? Figure:
    +-------------------+          +-------------------+
    |       MYSYS       |          |    MYSYS_AUDIT    |
    |                   |          |                   |
    | 1.   T1           |          | 1.   T1_AUD       |
    | 2.   T2           |          | 2.   T2_AUD       |
    | 3.   T3           |--------->| 3.   T3_AUD       |
    | 4.   T4           |(SELECT,  | 4.   T4_AUD       |
    | .                 | INSERT)  | .                 |
    | .                 |          | .                 |
    | .                 |          | .                 |
    | 100. T100         |          | 50.  T50_AUD      |
    +-------------------+          +-------------------+
              |
              |
              | (INSERT)
              |
              *
    +-------------------+
    |     MYSYS_ARC     |
    |                   |
    | 1.   T1_ARC       |
    | 2.   T2_ARC       |
    | 3.   T3_ARC       |
    | 4.   T4_ARC       |
    | .                 |
    | .                 |
    | .                 |
    | 100. T100_ARC     |
    +-------------------+
    Apart from this, we have two more schemas with read-only rights, but they are mainly for ad hoc purposes and we don't mind the performance on them.

    Read the article

  • Constraint Satisfaction Problem

    - by Carl Smotricz
    I'm struggling my way through Artificial Intelligence: A Modern Approach in order to alleviate my natural stupidity. In trying to solve some of the exercises, I've come up against the "Who Owns the Zebra" problem, Exercise 5.13 in Chapter 5. This has been a topic here on SO but the responses mostly addressed the question "how would you solve this if you had a free choice of problem solving software available?" I accept that Prolog is a very appropriate programming language for this kind of problem, and there are some fine packages available, e.g. in Python as shown by the top-ranked answer and also standalone. Alas, none of this is helping me "tough it out" in a way as outlined by the book. The book appears to suggest building a set of dual or perhaps global constraints, and then implementing some of the algorithms mentioned to find a solution. I'm having a lot of trouble coming up with a set of constraints suitable for modelling the problem. I'm studying this on my own so I don't have access to a professor or TA to get me over the hump - this is where I'm asking for your help. I see little similarity to the examples in the chapter. I was eager to build dual constraints and started out by creating (the logical equivalent of) 25 variables: nationality1, nationality2, nationality3, ... nationality5, pet1, pet2, pet3, ... pet5, drink1 ... drink5 and so on, where the number was indicative of the house's position. This is fine for building the unary constraints, e.g. The Norwegian lives in the first house: nationality1 = { :norway }. But most of the constraints are a combination of two such variables through a common house number, e.g. The Swede has a dog: nationality[n] = { :sweden } AND pet[n] = { :dog } where n can range from 1 to 5, obviously. Or stated another way: nationality1 = { :sweden } AND pet1 = { :dog } XOR nationality2 = { :sweden } AND pet2 = { :dog } XOR nationality3 = { :sweden } AND pet3 = { :dog } XOR nationality4 = { :sweden } AND pet4 = { :dog } XOR nationality5 = { :sweden } AND pet5 = { :dog } ...which has a decidedly different feel to it than the "list of tuples" advocated by the book: ( X1, X2, X3 = { val1, val2, val3 }, { val4, val5, val6 }, ... ) I'm not looking for a solution per se; I'm looking for a start on how to model this problem in a way that's compatible with the book's approach. Any help appreciated.
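    One reformulation that may make the constraints easier to state (a common alternative, not necessarily the one the text intends): give every attribute value its own variable (swede, norwegian, dog, milk, and so on) whose domain is the house positions 1 to 5. "The Norwegian lives in the first house" stays unary (norwegian = 1), "The Swede has a dog" collapses to the single binary constraint swede = dog, and the giant XOR disappears; the only extra constraints needed are alldiff within each attribute group. A minimal sketch of checking a candidate assignment against constraints of that shape, with illustrative variable names and a made-up partial assignment:

        #include <iostream>
        #include <map>
        #include <string>
        #include <vector>

        // Each variable is an attribute value; its value is the house (1..5) it ends up in.
        typedef std::map<std::string, int> Assignment;

        // A constraint is just a predicate over an assignment.
        struct Constraint {
            virtual ~Constraint() {}
            virtual bool ok(const Assignment &a) const = 0;
        };

        struct Unary : Constraint {                 // e.g. "the Norwegian lives in house 1"
            std::string var; int house;
            Unary(const std::string &v, int h) : var(v), house(h) {}
            bool ok(const Assignment &a) const { return a.at(var) == house; }
        };

        struct SameHouse : Constraint {             // e.g. "the Swede has a dog"
            std::string x, y;
            SameHouse(const std::string &x_, const std::string &y_) : x(x_), y(y_) {}
            bool ok(const Assignment &a) const { return a.at(x) == a.at(y); }
        };

        int main()
        {
            Assignment a;                           // an illustrative candidate assignment
            a["norwegian"] = 1; a["swede"] = 3; a["dog"] = 3;

            std::vector<Constraint*> cs;
            cs.push_back(new Unary("norwegian", 1));
            cs.push_back(new SameHouse("swede", "dog"));

            bool consistent = true;
            for (size_t i = 0; i < cs.size(); ++i) consistent = consistent && cs[i]->ok(a);
            std::cout << (consistent ? "consistent so far" : "violates a constraint") << std::endl;

            for (size_t i = 0; i < cs.size(); ++i) delete cs[i];
            return 0;
        }

    A full solver would still need the alldiff constraints over {norwegian, swede, ...}, {dog, zebra, ...}, etc., plus the neighbour clues (e.g. |x - y| = 1), but in this formulation every constraint is unary or binary, which is closer to the shape the chapter's algorithms expect.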

    Read the article

  • Stack and queue operations on the same array.

    - by Passonate Learner
    Hi. I've been thinking about a piece of program logic, but I cannot come to a conclusion on my problem. Here, I've implemented stack and queue operations on the same fixed array:
        int A[1000];
        int size = 1000;
        int top;
        int front;
        int rear;
        bool StackIsEmpty() { return (top == 0); }
        bool StackPush( int x ) { if ( top >= size ) return false; A[top++] = x; return true; }
        int StackTop( ) { return A[top-1]; }
        bool StackPop() { if ( top <= 0 ) return false; A[--top] = 0; return true; }
        bool QueueIsEmpty() { return (front == rear); }
        bool QueuePush( int x ) { if ( rear >= size ) return false; A[rear++] = x; return true; }
        int QueueFront( ) { return A[front]; }
        bool QueuePop() { if ( front >= rear ) return false; A[front++] = 0; return true; }
    It is presumed (or obvious) that the bottom of the stack and the front of the queue point at the same location, and vice versa (the top of the stack points at the same location as the rear of the queue). For example, say the integers 1 and 2 are inside the array, in order of writing. If I call StackPop(), the integer 2 will be popped out, and if I call QueuePop(), the integer 1 will be popped out. My problem is that I don't know what happens if I do both stack and queue operations on the same array. The example above is easy to work out, because there are only two values involved. But what if there are more than 2 values involved? For example, if I call StackPush(1); QueuePush(2); QueuePush(4); StackPop(); StackPush(5); QueuePop(); what values will be left in the final array, in order from the bottom (front)? I know that if I coded a program, I would receive a quick answer. But the reason I'm asking this is that I want to hear a logical explanation from a human being, not a computer.
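    For what it's worth, the interference comes entirely from top, front and rear advancing independently over the same cells. Below is a direct trace of the exact sequence asked about, assuming all three indices start at 0 (the snippet above never initialises them); the comments show the state after each call.

        #include <iostream>

        int A[1000];
        int size = 1000;
        int top = 0, front = 0, rear = 0;   // assumption: all three indices start at 0

        bool StackPush(int x) { if (top >= size)   return false; A[top++]   = x; return true; }
        bool StackPop()       { if (top <= 0)      return false; A[--top]   = 0; return true; }
        bool QueuePush(int x) { if (rear >= size)  return false; A[rear++]  = x; return true; }
        bool QueuePop()       { if (front >= rear) return false; A[front++] = 0; return true; }

        int main()
        {
            StackPush(1);   // A[0]=1                                   top=1, front=0, rear=0
            QueuePush(2);   // A[0]=2 (clobbers the 1: rear was still 0) rear=1
            QueuePush(4);   // A[1]=4                                   rear=2
            StackPop();     // A[0]=0                                   top=0
            StackPush(5);   // A[0]=5                                   top=1
            QueuePop();     // A[0]=0 (pops the 5 the stack just pushed) front=1
            // Final state: A[0]=0, A[1]=4; the stack's top is A[0]=0, the queue's front is A[1]=4.
            std::cout << "A[0]=" << A[0] << " A[1]=" << A[1]
                      << " top=" << top << " front=" << front << " rear=" << rear << std::endl;
            return 0;
        }

    In words: QueuePush(2) overwrote the stack's 1 because rear lagged behind top, and QueuePop() later consumed the stack's freshly pushed 5 for the same reason, which is why implementations that share one array usually grow the stack and the queue from opposite ends.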

    Read the article

  • Boggling Direct3D9 dynamic vertex buffer Lock crash/post-lock failure on Intel GMA X3100.

    - by nj
    Hi, for starters I'm a fairly seasoned graphics programmer, but as we all know, everyone makes mistakes. Unfortunately the codebase is a bit too large to start throwing sensible snippets here, and re-creating the whole situation in an isolated CPP/codebase is too tall an order - for which I am sorry, I do not have the time. I'll do my best to explain. B.t.w., I will of course supply specific pieces of code if someone wonders how I'm handling this or that! As with all resources in the D3DPOOL_DEFAULT pool, when the device context is taken away from you, you'll sooner or later have to reset your resources. I've built a mechanism to handle this for all relevant resources that's been working for years; but that fact notwithstanding, I've of course checked, asserted and doubted every assumption since this bug came to light. What happens is as follows: I have a rather large dynamic vertex buffer, exact size 18874368 bytes. This buffer is locked (and discarded fully, using the D3DLOCK_DISCARD flag) each frame prior to generating dynamic geometry (isosurface-related, f.y.i.) into it. This works fine until, of course, I start to reset. It might take 1 reset, it might take 2, or it might take 5 to set off a bug that causes either an access violation on the pointer returned by the Lock() operation on the renewed resource, or a plain crash inside the D3D9 DLL's Lock() call - at a somewhat similar address, but without the offset that gets tacked on in the first case (in the first case we're somewhere halfway through writing). I've tested this on other hardware and upgraded my GMA X3100 drivers (using a MacBook with Boot Camp) to the latest ones, but I can't reproduce it on any other machine and I'm at a loss about what's wrong here. I have tried to reproduce a similar situation with a similar buffer (I've got a large scratch pad of the same type that I filled with quads), and beyond a certain number of bytes it started to behave likewise. I'm not asking for a solution here, but I'm very interested if there are other developers here who have battled with the same foe, or maybe some who can point me in an insightful direction, maybe ask some questions that might shed a light on what I may or may not be overlooking. Another interesting artifact is that the vertex buffer starts to bug out if I supply both D3DLOCK_DISCARD and D3DLOCK_NOOVERWRITE together, which, even though not very logical (you're not going to overwrite if you've just discarded it all), gives graphics glitches. Thanks, and any corrections are more than welcome. Niels p.s - A friend of mine raised the valid point that it is a huge buffer for onboard video RAM and it's being at least double or triple buffered internally due to its dynamic nature. On the other hand, the debug output (D3D9 debug DLL + max. warning output) remains silent. p.s 2 - Had it tested on more machines and it still works - it's probably a matter of circumstance: the huge dynamic, internally double/triple-buffered buffer, not a lot of memory, and drivers that don't complain when they should. Unless someone has a better suggestion; I'd still love to hear it :)
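    Not a diagnosis, but for comparison, here is the lock discipline usually recommended for a large D3DUSAGE_DYNAMIC buffer across device resets: release the D3DPOOL_DEFAULT buffer before Reset(), recreate it afterwards, use D3DLOCK_DISCARD only when starting over (or when the buffer wraps), D3DLOCK_NOOVERWRITE only for appends, and never both flags in one call. The sketch below is Windows/DirectX SDK code and obviously won't run standalone; the function names and the offset bookkeeping are illustrative, not taken from the codebase discussed above.

        #include <d3d9.h>

        static const UINT kVBSize = 18874368;          // same size as the buffer described above
        static IDirect3DVertexBuffer9 *g_vb = NULL;
        static UINT g_writeOffset = 0;

        // Called after a successful device Reset(); the old buffer must already have been Release()d
        // before Reset() was issued, as with any D3DPOOL_DEFAULT resource.
        HRESULT RecreateDynamicVB(IDirect3DDevice9 *dev)
        {
            if (g_vb) { g_vb->Release(); g_vb = NULL; }
            g_writeOffset = 0;
            return dev->CreateVertexBuffer(kVBSize,
                                           D3DUSAGE_DYNAMIC | D3DUSAGE_WRITEONLY,
                                           0,                      // non-FVF buffer
                                           D3DPOOL_DEFAULT, &g_vb, NULL);
        }

        // Lock a chunk for writing: DISCARD when wrapping, NOOVERWRITE when appending, never both.
        void *LockForWrite(UINT bytesNeeded)
        {
            void *p = NULL;
            DWORD flags;
            if (g_writeOffset + bytesNeeded > kVBSize) {
                g_writeOffset = 0;
                flags = D3DLOCK_DISCARD;               // orphan the whole buffer and start over
            } else {
                flags = D3DLOCK_NOOVERWRITE;           // promise not to touch data the GPU may still read
            }
            if (FAILED(g_vb->Lock(g_writeOffset, bytesNeeded, &p, flags)))
                return NULL;
            g_writeOffset += bytesNeeded;
            return p;                                  // caller writes bytesNeeded bytes, then calls g_vb->Unlock()
        }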

    Read the article

  • Restoring multiple database backups in a transaction

    - by Raghu Dodda
    I wrote a stored procedure that restores as set of the database backups. It takes two parameters - a source directory and a restore directory. The procedure looks for all .bak files in the source directory (recursively) and restores all the databases. The stored procedure works as expected, but it has one issue - if I uncomment the try-catch statements, the procedure terminates with the following error: error_number = 3013 error_severity = 16 error_state = 1 error_message = DATABASE is terminating abnormally. The weird part is sometimes (it is not consistent) the restore is done even if the error occurs. The procedure: create proc usp_restore_databases ( @source_directory varchar(1000), @restore_directory varchar(1000) ) as begin declare @number_of_backup_files int -- begin transaction -- begin try -- step 0: Initial validation if(right(@source_directory, 1) <> '\') set @source_directory = @source_directory + '\' if(right(@restore_directory, 1) <> '\') set @restore_directory = @restore_directory + '\' -- step 1: Put all the backup files in the specified directory in a table -- declare @backup_files table ( file_path varchar(1000)) declare @dos_command varchar(1000) set @dos_command = 'dir ' + '"' + @source_directory + '*.bak" /s/b' /* DEBUG */ print @dos_command insert into @backup_files(file_path) exec xp_cmdshell @dos_command delete from @backup_files where file_path IS NULL select @number_of_backup_files = count(1) from @backup_files /* DEBUG */ select * from @backup_files /* DEBUG */ print @number_of_backup_files -- step 2: restore each backup file -- declare backup_file_cursor cursor for select file_path from @backup_files open backup_file_cursor declare @index int; set @index = 0 while(@index < @number_of_backup_files) begin declare @backup_file_path varchar(1000) fetch next from backup_file_cursor into @backup_file_path /* DEBUG */ print @backup_file_path -- step 2a: parse the full backup file name to get the DB file name. 
declare @db_name varchar(100) set @db_name = right(@backup_file_path, charindex('\', reverse(@backup_file_path)) -1) -- still has the .bak extension /* DEBUG */ print @db_name set @db_name = left(@db_name, charindex('.', @db_name) -1) /* DEBUG */ print @db_name set @db_name = lower(@db_name) /* DEBUG */ print @db_name -- step 2b: find out the logical names of the mdf and ldf files declare @mdf_logical_name varchar(100), @ldf_logical_name varchar(100) declare @backup_file_contents table ( LogicalName nvarchar(128), PhysicalName nvarchar(260), [Type] char(1), FileGroupName nvarchar(128), [Size] numeric(20,0), [MaxSize] numeric(20,0), FileID bigint, CreateLSN numeric(25,0), DropLSN numeric(25,0) NULL, UniqueID uniqueidentifier, ReadOnlyLSN numeric(25,0) NULL, ReadWriteLSN numeric(25,0) NULL, BackupSizeInBytes bigint, SourceBlockSize int, FileGroupID int, LogGroupGUID uniqueidentifier NULL, DifferentialBaseLSN numeric(25,0) NULL, DifferentialBaseGUID uniqueidentifier, IsReadOnly bit, IsPresent bit ) insert into @backup_file_contents exec ('restore filelistonly from disk=' + '''' + @backup_file_path + '''') select @mdf_logical_name = LogicalName from @backup_file_contents where [Type] = 'D' select @ldf_logical_name = LogicalName from @backup_file_contents where [Type] = 'L' /* DEBUG */ print @mdf_logical_name + ', ' + @ldf_logical_name -- step 2c: restore declare @mdf_file_name varchar(1000), @ldf_file_name varchar(1000) set @mdf_file_name = @restore_directory + @db_name + '.mdf' set @ldf_file_name = @restore_directory + @db_name + '.ldf' /* DEBUG */ print 'mdf_logical_name = ' + @mdf_logical_name + '|' + 'ldf_logical_name = ' + @ldf_logical_name + '|' + 'db_name = ' + @db_name + '|' + 'backup_file_path = ' + @backup_file_path + '|' + 'restore_directory = ' + @restore_directory + '|' + 'mdf_file_name = ' + @mdf_file_name + '|' + 'ldf_file_name = ' + @ldf_file_name restore database @db_name from disk = @backup_file_path with move @mdf_logical_name to @mdf_file_name, move @ldf_logical_name to @ldf_file_name -- step 2d: iterate set @index = @index + 1 end close backup_file_cursor deallocate backup_file_cursor -- end try -- begin catch -- print error_message() -- rollback transaction -- return -- end catch -- -- commit transaction end Does anybody have any ideas why this might be happening? Another question: is the transaction code useful ? i.e., if there are 2 databases to be restored, will SQL Server undo the restore of one database if the second restore fails?

    Read the article

< Previous Page | 71 72 73 74 75 76 77 78 79 80 81 82  | Next Page >