Search Results

Search found 787 results on 32 pages for 'augmented reality'.


  • Can a folder on a NAS be made available as a physical drive in VMware?

    - by asbjornu
    We are currently in the process of moving from a single web server to two load-balanced web servers and are facing some challenges we don't quite know how to fix. One of these is that the current single server hosts applications that write stuff to disk. The applications running on the server expect that when something is written to disk it will in fact exist later, so it's important that this premise is fulfilled with the dual-server architecture as well. The dual-server setup is a couple of VMware instances with Windows Server 2008 R2 as the guest operating system. Out of the box, these instances do not share any kind of file system, so just moving the applications over would make them break, since one instance would write something to the file system that doesn't exist on the other. Thus we need to share a file system between the two virtual servers. Our host has proposed to create a network share on a SAN and map this share individually on each virtual machine. This doesn't work too well due to NTFS permissions, etc., because the share needs to be accessed by several independent web applications that won't even be in the same application pool. The only solution that kind of works is to hard-code an "identity" for each web application into its web.config file, but this means passwords in clear text, which doesn't sit well with me. Since the servers are virtual, I'm thinking: wouldn't it be possible to make a NAS area available as a physical disk in the guest operating system somehow? Since VMware has full control of the virtual hardware, you'd think it would be able to "fake" a local hard drive in the virtual machine that in reality is a folder on a NAS, but so far I haven't found anything that states how and if this is possible. So I have to ask the wonderful Server Fault community: can a folder on a NAS be made available as a physical drive (typically D:) in both of the virtual machines?

    Read the article

  • Problems installing Windows service via Group Policy in a domain

    - by CraneStyle
    I'm reasonably new to Group Policy administration and I'm trying to deploy an MSI installer via Active Directory to install a service. In reality, I'm a software developer trying to test how my service will be installed in a domain environment. My test environment: a Server 2003 domain controller and about 10 machines (a mix of XP SP3 and Server 2008), all joined to my domain. No other real setup or Active Directory configuration has been done apart from things like getting DNS right. I suspect that I may be missing a step in Group Policy that says I need to grant an explicit permission somewhere, but I have no idea where that might be or what it will say. What I've done: I followed the documentation from Microsoft in How to Deploy Software via Group Policy, so I believe all those steps are correct (I used the UNC path, verified NTFS permissions, and verified that the computers and users are members of groups that are assigned to receive the policy, etc.). If I deploy the software via the Computer Configuration, when I reboot the target machine I get the following: when the computer starts up it logs Event ID 108 and says "Failed to apply changes to software installation settings. Software changes could not be applied. A previous log entry with details should exist. The error was: An operations error occurred." There are no previous log entries to check, which is weird, because if it ever actually tried to invoke the Windows Installer it should log any sort of failure of my application's installer. If I open a command prompt and manually run: msiexec /qb /i \\[host]\[share]\installer.msi it installs the service just fine. If I deploy the software via the User Configuration, when I log that user in, the Event Log says that software changes were applied successfully, but my service isn't installed. However, when it's deployed via the User Configuration, even though it's not installed, if I go to Control Panel - Add/Remove Programs and click Add New Programs, my service installer is advertised and I can install/remove it from there (this does not happen when it's assigned to computers). Hopefully that wall of text was enough information to get me going. Thanks all for the help.

    Read the article

  • What is the max connections via remote desktop for a small server?

    - by Jay Wen
    I have a small server running MS Server 2012. The CPU is a Xeon E3-1230 V2 @ 3.30GHz with 4 cores and 8 logical processors, and there is 8 GB of RAM. The main HD is a Samsung 840, and the big storage is a 4-disk WD Black RAID 10 array in a Synology NAS enclosure. My question is: given this hardware, approximately how many users can the system support via "Remote Desktop Connection"? Assume there are no licensing limits. These are not admin users (I know there is a two-admin limit). This boils down to: what resources does one remote connection require? RAM? A percentage of the CPU? Network bandwidth? I guess the base case would be a connection where the user is inactive or simply browsing CNN. Once you know this, you know how many you could fit on the machine before something is maxed out. In reality, users would mostly be in Excel (multi-MB spreadsheets). I know the approximate resources currently required by each copy of Excel.

    Read the article

  • Backing up 80G hard drive 1G per day

    - by barrycarter
    I want to securely back up my 80G HD, but doing a complete backup takes forever and slows down my machine, so I want to back up just 1G per day. Details:
    - First hurdle: on the first day, I want to back up the "first" 1G of the hard drive. Of course, there really is no "first" 1G on a hard drive.
    - After 80 days, I'll have my whole HD backed up... assuming none of my files ever change, which of course they do. So the backup plan/program must also catch file creations/changes as they come along.
    - The backups must be consistent, in that I can restore my system by restoring the backups sequentially. In other words, "dd if=/harddrive" probably won't work.
    - The backups should encrypt file contents AND names, but I don't see this as a major hurdle.
    - Once the backup has backed up everything (even changed files), it can back up the first 1G on my hard drive again. Even though this backup is redundant, that's OK, because I always want to be backing up something (e.g., if I'm backing up to optical media, the older media might start going corrupt).
    Is there a magic backup plan/program that does this? In reality, I want to do this for multiple machines with multiple drives each, but I think that solving the above will solve the general case.
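    For what it's worth, the rotation described above can be sketched in a few dozen lines. The following Python sketch (hypothetical paths and a 1 GiB daily budget; the encrypting copy step itself is left out) first picks up files changed since the last run, then keeps sweeping the full file list from wherever the previous day stopped:

      import json, os

      BUDGET = 1 * 1024**3            # assumed daily budget (~1 GiB)
      STATE  = os.path.expanduser("~/.rotating_backup_state.json")
      ROOT   = "/home"                # hypothetical tree to protect

      def load_state():
          try:
              with open(STATE) as f:
                  return json.load(f)
          except (OSError, ValueError):
              return {"cursor": "", "mtimes": {}}

      def all_files(root):
          for dirpath, _, names in os.walk(root):
              for n in names:
                  yield os.path.join(dirpath, n)

      def plan_todays_batch():
          state = load_state()
          files = sorted(all_files(ROOT))     # deterministic sweep order
          batch, used = [], 0
          # 1) files changed since the last run come first
          for f in files:
              if os.path.getmtime(f) > state["mtimes"].get(f, 0):
                  size = os.path.getsize(f)
                  if used + size > BUDGET:
                      break
                  batch.append(f); used += size
          # 2) then continue the sweep from where we left off, wrapping around
          start = files.index(state["cursor"]) + 1 if state["cursor"] in files else 0
          for f in files[start:] + files[:start]:
              if used >= BUDGET:
                  break
              if f in batch:
                  continue
              size = os.path.getsize(f)
              if used + size <= BUDGET:
                  batch.append(f); used += size
                  state["cursor"] = f
          for f in batch:
              state["mtimes"][f] = os.path.getmtime(f)
          with open(STATE, "w") as f:
              json.dump(state, f)
          return batch   # hand this list to an encrypting copy step (gpg, duplicity, ...)

    Files larger than the daily budget would need to be split or handled separately; the sketch simply skips them.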

    Read the article

  • Getting PAM/user info into php - something like Net_Finger instead of a db?

    - by digitaltoast
    I've got a very small user group who just need to log in, upload, check, and then move specific files to a different area when ready. Right now, I use the nginx PAM auth module to log them in against their Unix accounts. As their login is their home directory, I've already got the info to send the uploads to the right area - one line of PHP and no database needed. But I'm maintaining a separate DB just so PHP can welcome them, grab their email address, and send them an email when processing is done. Yes, sure, I could use NoSQL or SQLite instead so as not to need a whole MySQL install. But it occurred to me that, since I've got all these blank user fields for phone numbers that I could populate with any data, I could use something like PHP's Net_Finger. Which failed for me with: sudo pear install Net_Finger Starting to download Net_Finger-1.0.1.tgz (1,618 bytes) ....done: 1,618 bytes could not extract the package.xml file from "/build/buildd/php5-5.5.9+dfsg/pear-build-download/Net_Finger-1.0.1.tgz" Download of "pear/Net_Finger" succeeded, but it is not a valid package archive Error: cannot download "pear/Net_Finger" At which point I thought I'd stop and take a Server Fault reality check: is this a really bad/dangerous/stupid idea just to stop me having to maintain details in two places rather than one? Is there a better way? Googling shows that it's not an oft-asked thing, so perhaps with good reason?
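    For context, the underlying idea - reading a user's details straight from the system account database (the GECOS field) instead of keeping a parallel DB - doesn't require finger at all; PHP's posix_getpwnam() exposes the same fields. A minimal sketch of the lookup, written in Python for illustration (hypothetical account name):

      import pwd

      def user_details(username):
          entry = pwd.getpwnam(username)            # raises KeyError if the user doesn't exist
          full_name = entry.pw_gecos.split(",")[0]  # GECOS: "Full Name,Room,Work phone,Home phone"
          return {"login": entry.pw_name, "name": full_name, "home": entry.pw_dir}

      print(user_details("digitaltoast"))           # hypothetical username

    This only resolves accounts visible through the local passwd/NSS configuration, which matches the PAM-against-Unix-accounts setup described above.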

    Read the article

  • cpusets not working - threads aren't running in the cpuset I specified?

    - by lori
    I have used cpuset to shield some CPUs for exclusive use by some realtime threads. Displaying the cpuset config with the test app RealtimeTest1 running and its tasks moved into the cpusets: $ cset set --list -r cset: Name CPUs-X MEMs-X Tasks Subs Path ------------ ---------- - ------- - ----- ---- ---------- root 0-23 y 0-1 y 279 2 / system 0,2,4,6,8,10 n 0 n 202 0 /system shield 1,3,5,7,9,11 n 1 n 0 2 /shield RealtimeTest1 1,3,5,7 n 1 n 0 4 /shield/RealtimeTest1 thread1 3 n 1 n 1 0 /shield/RealtimeTest1/thread1 thread2 5 n 1 n 1 0 /shield/RealtimeTest1/thread2 main 1 n 1 n 1 0 /shield/RealtimeTest1/main I can interrogate the cpuset filesystem to show that my tasks are supposedly pinned to the CPUs I requested: /cpusets/shield/RealtimeTest1 $ for i in `find -name tasks`; do echo $i; cat $i; echo "------------"; done ./thread1/tasks 17651 ------------ ./main/tasks 17649 ------------ ./thread2/tasks 17654 ------------ Further, if I use sched_getaffinity, it reports what cpuset does - that thread1 is on CPU 3 and thread2 is on CPU 5. However, if I run top -p 17649 -H with f,j to bring up the last used CPU, it shows that thread 1 is running on thread 2's CPU, and the main thread is running on a CPU in the system cpuset (note that thread 17654 is running FIFO, hence thread 17651 is blocked): PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ P COMMAND 17654 root -2 0 54080 35m 7064 R 100 0.4 5:00.77 3 RealtimeTest 17649 root 20 0 54080 35m 7064 S 0 0.4 0:00.05 2 RealtimeTest 17651 root 20 0 54080 35m 7064 R 0 0.4 0:00.00 3 RealtimeTest Also, looking at /proc/17649/task to find the last_cpu each of its tasks ran on: /proc/17649/task $ for i in `ls -1`; do cat $i/stat | awk '{print $1 " is on " $(NF - 5)}'; done 17649 is on 2 17651 is on 3 17654 is on 3 So cpuset and sched_getaffinity report one thing, but reality is another. Would I be right to say that cpuset is not working? My machine configuration is: $ cat /etc/SuSE-release SUSE Linux Enterprise Server 11 (x86_64) VERSION = 11 PATCHLEVEL = 1 $ uname -a Linux foobar 2.6.32.12-0.7-default #1 SMP 2010-05-20 11:14:20 +0200 x86_64 x86_64 x86_64 GNU/Linux
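    As a side note, the awk one-liner above counts fields from the end of the stat line, which can be fragile. A small Python sketch (hypothetical PID from the question; field positions per proc(5)) that reads the "processor" field by position and prints it next to the affinity mask the scheduler reports:

      import os

      def last_cpu(pid, tid):
          # /proc/<pid>/task/<tid>/stat: skip "pid (comm)" -- comm may contain spaces --
          # then "processor" is field 39 overall, i.e. index 36 of the remaining fields.
          with open(f"/proc/{pid}/task/{tid}/stat") as f:
              rest = f.read().rsplit(")", 1)[1].split()
          return int(rest[36])

      pid = 17649                                   # hypothetical PID from the question
      for tid in os.listdir(f"/proc/{pid}/task"):
          print(tid,
                "affinity", sorted(os.sched_getaffinity(int(tid))),
                "last ran on CPU", last_cpu(pid, tid))

    Comparing the two columns per thread is a quick way to see whether the scheduler's mask and the CPU a thread actually last ran on disagree, as they appear to in the output above.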

    Read the article

  • How many guesses per second are possible against an encrypted disk? [closed]

    - by HappyDeveloper
    I understand that guesses per second depend on the hardware and the encryption algorithm, so I don't expect an absolute number as an answer. For example, with an average machine you can make a lot (thousands?) of guesses per second against a hash created with a single MD5 round, because MD5 is fast, making brute-force and dictionary attacks a real danger for most passwords. But if instead you use bcrypt with enough rounds, you can slow the attack down to 1 guess per second, for example. 1) So how does disk encryption usually work? This is how I imagine it; tell me if it is close to reality: when I enter the passphrase, it is hashed with a slow algorithm to generate a key (always the same one?). Because this is slow, brute force is not a good approach to break it. Then, with the generated key, the disk is decrypted on the fly very fast, so there is no significant performance loss. 2) How can I test this with my own machine? I want to calculate the guesses per second my machine can make. 3) How many guesses per second are possible against an encrypted disk with the fastest PC available so far?
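    Regarding (2), one rough way to get a feel for a particular machine is to time the slow key-derivation step directly (this is broadly what `cryptsetup benchmark` measures for LUKS setups). A hedged Python sketch using PBKDF2 as a stand-in KDF - the exact KDF and iteration count of a real encrypted disk will differ:

      import hashlib, os, time

      def guesses_per_second(iterations, trials=20):
          salt = os.urandom(16)
          start = time.perf_counter()
          for _ in range(trials):
              hashlib.pbkdf2_hmac("sha256", b"not-the-real-passphrase", salt, iterations)
          return trials / (time.perf_counter() - start)

      # Disk encryption tools typically pick the iteration count so that one unlock
      # attempt costs a fixed amount of wall-clock time on the enrolling machine.
      for iters in (1_000, 100_000, 1_000_000):
          print(f"{iters:>9} iterations -> ~{guesses_per_second(iters):,.1f} guesses/sec on this CPU")

    The point the sketch illustrates is that the guess rate scales inversely with the iteration count chosen at format time, not with how fast the bulk on-the-fly decryption is.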

    Read the article

  • Which would be more reliable for data archival - SD card or a generic USB thumbdrive?

    - by Visitor
    I've been thinking lately about what I should preferably use for data storage and archival. I will say in advance that I do not use flash memory as the only storage medium - I also keep my data on hard drives and optical disks - flash memory is but one of several backup solutions that duplicate each other. For the flash memory, however, I do have a choice - to use a generic USB thumbdrive or an SD card. Are there any indications that SD cards may be better and more reliable? From browsing people's reviews on the web, I see that many complaints about USB sticks have to do with them completely failing, losing their file system, and no longer being recognized by the OS. At the same time, most of the complaints about SD cards deal with write speeds not holding up to the promise - failure reports are but a fraction of those for USB sticks. Are SD cards indeed more reliable? Am I also correct in my assumption that SD cards use higher-grade NAND chips than USB thumbdrives? At least for Class 10 cards, because the specification dictates the minimum performance and the manufacturers have to preselect better chips. It is common for USB sticks to promise high speeds "up to XX MB/sec", but the reality is that they very often deliver speeds 2-3 times lower than promised. Do SD cards get the better NAND chips while USB thumbdrives receive the discarded ones? Any thoughts would be appreciated.

    Read the article

  • Retrieve Performance Data from SOA Infrastructure Database

    - by fip
    My earlier blog posting shows how to enable, retrieve and interpret BPEL engine performance statistics to aid performance troubleshooting. The strength of the BPEL engine statistics in EM is their breakdown per request. But there are some limitations with the BPEL performance statistics mentioned in that blog posting: the statistics were stored in memory instead of being persisted. To avoid memory overflow, the data are stored in a buffer of limited size. When the statistic entries exceed the limit, old data are flushed out to make way for new statistics. Therefore it can only keep the last X entries of data; the statistics from 5 hours ago may not be there anymore. Also, the BPEL engine performance statistics only include latencies; they do not provide throughputs. Fortunately, Oracle SOA Suite runs with the SOA Infrastructure database and a lot of performance data are naturally persisted there. It is at a coarser grain than the in-memory BPEL statistics, but it has its own strengths as it is persisted. Here I would like to offer examples of some basic SQL queries you can run against the infrastructure database of Oracle SOA Suite 11G to acquire the performance statistics for a given period of time. You can run them immediately after you modify the date range to match your actual system.

    1. Asynchronous/one-way message incoming rates
    The following query will show the number of messages sent to one-way/async BPEL processes during a given time period, organized by process names and states:
    select composite_name composite, state, count(*) Count from dlv_message where receive_date >= to_timestamp('2012-10-24 21:00:00','YYYY-MM-DD HH24:MI:SS') and receive_date <= to_timestamp('2012-10-24 21:59:59','YYYY-MM-DD HH24:MI:SS') group by composite_name, state order by Count;

    2. Throughput of BPEL process instances
    The following query shows the number of synchronous and asynchronous process instances created during a given time period. It lists instances in all states, including unfinished and faulted ones. The results will include all composites across all SOA partitions:
    select state, count(*) Count, composite_name composite, component_name,componenttype from cube_instance where creation_date >= to_timestamp('2012-10-24 21:00:00','YYYY-MM-DD HH24:MI:SS') and creation_date <= to_timestamp('2012-10-24 21:59:59','YYYY-MM-DD HH24:MI:SS') group by composite_name, component_name, componenttype order by count(*) desc;

    3. Throughput and latencies of BPEL process instances
    This query builds on the previous one, providing more comprehensive information. It gives not only throughput but also the maximum, minimum and average elapsed time of BPEL process instances:
    select composite_name Composite, component_name Process, componenttype, state, count(*) Count, trunc(Max(extract(day from (modify_date-creation_date))*24*60*60 + extract(hour from (modify_date-creation_date))*60*60 + extract(minute from (modify_date-creation_date))*60 + extract(second from (modify_date-creation_date))),4) MaxTime, trunc(Min(extract(day from (modify_date-creation_date))*24*60*60 + extract(hour from (modify_date-creation_date))*60*60 + extract(minute from (modify_date-creation_date))*60 + extract(second from (modify_date-creation_date))),4) MinTime, trunc(AVG(extract(day from (modify_date-creation_date))*24*60*60 + extract(hour from (modify_date-creation_date))*60*60 + extract(minute from (modify_date-creation_date))*60 + extract(second from (modify_date-creation_date))),4) AvgTime from cube_instance where creation_date >= to_timestamp('2012-10-24 21:00:00','YYYY-MM-DD HH24:MI:SS') and creation_date <= to_timestamp('2012-10-24 21:59:59','YYYY-MM-DD HH24:MI:SS') group by composite_name, component_name, componenttype, state order by count(*) desc;

    4. Combine all together
    Now let's combine all three queries and parameterize the start and end timestamps to make the script a bit more robust. The following script will prompt for the start and end time before querying the database:
    accept startTime prompt 'Enter start time (YYYY-MM-DD HH24:MI:SS)'
    accept endTime prompt 'Enter end time (YYYY-MM-DD HH24:MI:SS)'
    Prompt "==== Rejected Messages ====";
    REM 2012-10-24 21:00:00
    REM 2012-10-24 21:59:59
    select count(*), composite_dn from rejected_message where created_time >= to_timestamp('&&StartTime','YYYY-MM-DD HH24:MI:SS') and created_time <= to_timestamp('&&EndTime','YYYY-MM-DD HH24:MI:SS') group by composite_dn;
    Prompt " ";
    Prompt "==== Throughput of one-way/asynchronous messages ====";
    select state, count(*) Count, composite_name composite from dlv_message where receive_date >= to_timestamp('&StartTime','YYYY-MM-DD HH24:MI:SS') and receive_date <= to_timestamp('&EndTime','YYYY-MM-DD HH24:MI:SS') group by composite_name, state order by Count;
    Prompt " ";
    Prompt "==== Throughput and latency of BPEL process instances ===="
    select state, count(*) Count, trunc(Max(extract(day from (modify_date-creation_date))*24*60*60 + extract(hour from (modify_date-creation_date))*60*60 + extract(minute from (modify_date-creation_date))*60 + extract(second from (modify_date-creation_date))),4) MaxTime, trunc(Min(extract(day from (modify_date-creation_date))*24*60*60 + extract(hour from (modify_date-creation_date))*60*60 + extract(minute from (modify_date-creation_date))*60 + extract(second from (modify_date-creation_date))),4) MinTime, trunc(AVG(extract(day from (modify_date-creation_date))*24*60*60 + extract(hour from (modify_date-creation_date))*60*60 + extract(minute from (modify_date-creation_date))*60 + extract(second from (modify_date-creation_date))),4) AvgTime, composite_name Composite, component_name Process, componenttype from cube_instance where creation_date >= to_timestamp('&StartTime','YYYY-MM-DD HH24:MI:SS') and creation_date <= to_timestamp('&EndTime','YYYY-MM-DD HH24:MI:SS') group by composite_name, component_name, componenttype, state order by count(*) desc;

    Read the article

  • IBM Keynote: (hardware,software)–>{IBM.java.patterns}

    - by Janice J. Heiss
    On Sunday evening, September 30, 2012, Jason McGee, IBM Distinguished Engineer and Chief Architect for Cloud Computing, along with John Duimovich, IBM Distinguished Engineer and Java CTO, gave an information- and idea-rich keynote that left Java developers with much to ponder. Their focus was on the challenges of making Java more efficient and productive given the hardware and software environments of 2012. “One idea that is very interesting is the idea of multi-tenancy,” said McGee, “and how we can move up the spectrum. In traditional systems, we ran applications on dedicated middleware, operating systems and hardware. A lot of customers still run that way. Now people introduce hardware virtualization and share the hardware. That is good but there is a lot more we can do. We can share middleware and the application itself.” McGee challenged developers to better enable the Java language to function in these higher-density models. He spoke about the need to describe patterns that help us grasp the full environment that an application needs, whether it’s a web or full enterprise application. Developers need to understand the resources that an application interacts with in a way that is simple and straightforward. The task is to then automate that deployment so that the complexity of infrastructure can be bypassed and developers can live in a simpler world where the cloud can automatically configure the needed environment. McGee argued that the key, something IBM has been working on, is to use a simpler pattern that allows a cloud-based architecture to embrace the entire infrastructure required for an application and make it highly available, scalable and able to recover from failure. The cloud-based architecture would automate the complexity of setting up and managing the infrastructure. IBM has been trying to realize this vision for customers so they can describe their Java application environment simply and allow the cloud to automate the deployment and management of applications. “The point,” explained McGee, “is to package the executable used to describe applications, to drop it into a shared system and let that system provide some intelligence about how to deploy and manage those applications.”

    John Duimovich on Improvements in Java
    McGee then brought onstage IBM’s Distinguished Engineer and CTO for Java, John Duimovich, who showed the audience ways to deploy Java applications more efficiently. Duimovich explained that, “When you run lots of copies of Java in the cloud or any hypervisor-virtualized system, there are a lot of duplications of code and jar files. IBM has a facility called ‘shared classes’ where we put shared code, read-only artefacts, in a cache that is sharable across hypervisors.” By putting JIT code in ahead of time, he explained, the application server will use 20% less memory and operate 30% faster. He described another example of how the JVM allows for the maximum amount of sharing, managing tenants, file sockets and memory use through throttling and control. Duimovich touched on the “thin is in” model and IBM’s Liberty Profile and lightweight runtime for the cloud, which allows for greater efficiency in interacting with the cloud. Duimovich discussed the confusion Java developers experience when, for example, the hypervisor tells them that they have 8 and then 4 and then 16 cores. “Because hypervisors are virtualized, they can change based on resource needs across the hypervisor layer. You may have 10 instances of an operating system and you may need to reallocate memory,” explained Duimovich. He showed how to resize LPARs, reallocate CPUs and migrate applications as needed. He explained how application servers can resize thread pools and better use resources based on information from the hypervisors.

    Java Challenges in Hardware and Software
    McGee ended the keynote with a summary of upcoming hardware and software challenges for the Java platform. He noted that one reason developers love Java is that it allows them to ignore differences in hardware. He stated that the most important things happening in hardware were in network and storage – developments such as the speed of SSDs, the exploitation of high-speed, low-latency networking, and recent developments such as storage-class memory and non-volatile main memory. “So we are challenged to maintain the benefits of Java and the abstraction it provides from hardware while still exploiting the new innovations in hardware,” said McGee. McGee discussed transactional messaging applications, where developers sending messages must transactionally persist a message to storage – something traditionally done by backing messages on spinning disks, an approach that is now mostly outdated. “Now,” he pointed out, “we would use SSD and store it in Flash and get 70,000 messages a second. If we stored it using a PCI Express-based flash memory device, it is still Flash but put on a PCI Express bus on a card closer to the CPU. This way I get 300,000 messages a second and a 25% improvement in latency.” McGee’s central point was that hardware has a huge impact on the performance and scalability of applications. New technologies are enabling developers to build classes of Java applications previously unheard of. “We need to be able to balance these things in Java – we need to maintain the abstraction but also be able to exploit the evolution of hardware technology,” said McGee. According to McGee, IBM's current focus is on systems wherein hardware and software are shipped together in what are called Expert Integrated Systems – systems that are pre-optimized and pre-integrated. McGee closed IBM’s engaging and thought-provoking keynote by pointing out that the use of Java in complex applications is increasingly being augmented by a host of other languages with strong communities around them – JavaScript, JRuby, Scala, Python and so forth. Java developers now must understand the strengths and weaknesses of such newcomers as applications increasingly involve a complex interconnection of languages.

    Read the article

  • Performance triage

    - by Dave
    Folks often ask me how to approach a suspected performance issue. My personal strategy is informed by the fact that I work on concurrency issues. (When you have a hammer everything looks like a nail, but I'll try to keep this general.) A good starting point is to ask yourself if the observed performance matches your expectations. Expectations might be derived from known system performance limits, prototypes, and other software or environments that are comparable to your particular system-under-test. Some simple comparisons and microbenchmarks can be useful at this stage. It's also useful to write some very simple programs to validate some of the reported or expected system limits. Can that disk controller really tolerate and sustain 500 reads per second? To reduce the number of confounding factors it's better to try to answer that question with a very simple targeted program. And finally, nothing beats having familiarity with the technologies that underlie your particular layer. On the topic of confounding factors, as our technology stacks become deeper and less transparent, we often find our own technology working against us in some unexpected way to choke performance rather than simply running into some fundamental system limit. A good example is the warm-up time needed by just-in-time compilers in Java Virtual Machines. I won't delve too far into that particular hole except to say that it's rare to find good benchmarks and methodology for Java code. Another example is power management on x86. Power management is great, but it can take a while for the CPUs to throttle up from low(er) frequencies to full throttle. And while I love "turbo" mode, it makes benchmarking applications with multiple threads a chore, as you have to remember to turn it off and then back on; otherwise short single-threaded runs may look abnormally fast compared to runs with higher thread counts. In general for performance characterization I disable turbo mode and fix the power governor at the "performance" state. Another source of complexity is the scheduler, which I've discussed in prior blog entries. Let's say I have a running application and I want to better understand its behavior and performance. We'll presume it's warmed up, is under load, and is in an execution mode representative of what we think the norm would be. It should be in steady state, if a steady-state mode even exists. On Solaris the very first thing I'll do is take a set of "pstack" samples. Pstack briefly stops the process and walks each of the stacks, reporting symbolic information (if available) for each frame. For Java, pstack has been augmented to understand Java frames, and even report inlining. A few pstack samples can provide powerful insight into what's actually going on inside the program. You'll be able to see calling patterns, which threads are blocked on what system calls or synchronization constructs, memory allocation, etc. If your code is CPU-bound then you'll get a good sense of where the cycles are being spent. (I should caution that normal C/C++ inlining can diffuse an otherwise "hot" method into other methods. This is a rare instance where pstack sampling might not immediately point to the key problem.) At this point you'll need to reconcile what you're seeing with pstack and your mental model of what you think the program should be doing. They're often rather different. And generally if there's a key performance issue, you'll spot it with a moderate number of samples.
    I'll also use OS-level observability tools to look for the existence of bottlenecks where threads contend for locks, other situations where threads are blocked, and the distribution of threads over the system. On Solaris some good tools are mpstat and, to a lesser degree, vmstat. Try running "mpstat -a 5" in one window while the application program runs concurrently. One key measure is the voluntary context switch rate, "vctx" or "csw", which reflects threads descheduling themselves. It's also good to look at the user, system, and idle CPU percentages. This can give a broad but useful understanding of whether your threads are mostly parked or mostly running. For instance, if your program makes heavy use of malloc/free, then it might be the case that you're contending on the central malloc lock in the default allocator. In that case you'd see malloc calling lock functions in the stack traces, observe a high csw/vctx rate as threads block for the malloc lock, and your "usr" time would be less than expected. Solaris dtrace is a wonderful and invaluable performance tool as well, but in a sense you have to frame and articulate a meaningful and specific question to get a useful answer, so I tend not to use it for first-order screening of problems. It's also most effective for OS- and software-level performance issues as opposed to HW-level issues. For that reason I recommend mpstat & pstack as the first step in performance triage. If some other OS-level issue is evident then it's good to switch to dtrace to drill more deeply into the problem. Only after I've ruled out OS-level issues do I switch to using hardware performance counters to look for architectural impediments.

    Read the article

  • ADF Business Components

    - by Arda Eralp
    ADF Business Components and JDeveloper simplify the development, delivery, and customization of business applications for the Java EE platform. With ADF Business Components, developers aren't required to write the application infrastructure code required by the typical Java EE application to: Connect to the database Retrieve data Lock database records Manage transactions   ADF Business Components addresses these tasks through its library of reusable software components and through the supporting design time facilities in JDeveloper. Most importantly, developers save time using ADF Business Components since the JDeveloper design time makes typical development tasks entirely declarative. In particular, JDeveloper supports declarative development with ADF Business Components to: Author and test business logic in components which automatically integrate with databases Reuse business logic through multiple SQL-based views of data, supporting different application tasks Access and update the views from browser, desktop, mobile, and web service clients Customize application functionality in layers without requiring modification of the delivered application The goal of ADF Business Components is to make the business services developer more productive.   ADF Business Components provides a foundation of Java classes that allow your business-tier application components to leverage the functionality provided in the following areas: Simplifying Data Access Design a data model for client displays, including only necessary data Include master-detail hierarchies of any complexity as part of the data model Implement end-user Query-by-Example data filtering without code Automatically coordinate data model changes with business services layer Automatically validate and save any changes to the database   Enforcing Business Domain Validation and Business Logic Declaratively enforce required fields, primary key uniqueness, data precision-scale, and foreign key references Easily capture and enforce both simple and complex business rules, programmatically or declaratively, with multilevel validation support Navigate relationships between business domain objects and enforce constraints related to compound components   Supporting Sophisticated UIs with Multipage Units of Work Automatically reflect changes made by business service application logic in the user interface Retrieve reference information from related tables, and automatically maintain the information when the user changes foreign-key values Simplify multistep web-based business transactions with automatic web-tier state management Handle images, video, sound, and documents without having to use code Synchronize pending data changes across multiple views of data Consistently apply prompts, tooltips, format masks, and error messages in any application Define custom metadata for any business components to support metadata-driven user interface or application functionality Add dynamic attributes at runtime to simplify per-row state management   Implementing High-Performance Service-Oriented Architecture Support highly functional web service interfaces for business integration without writing code Enforce best-practice interface-based programming style Simplify application security with automatic JAAS integration and audit maintenance "Write once, run anywhere": use the same business service as plain Java class, EJB session bean, or web service   Streamlining Application Customization Extend component functionality after delivery without modifying source 
    code; globally substitute delivered components with extended ones without modifying the application.

    ADF Business Components implements the business service through the following set of cooperating components:

    Entity object: An entity object represents a row in a database table and simplifies modifying its data by handling all data manipulation language (DML) operations for you. These are basically your one-to-one representations of a database table. Each table in the database will have one and only one EO. The EO contains the mapping between columns and attributes. EOs also contain the business logic and validation. These are your core data services. They are responsible for updating, inserting and deleting records. The Attributes tab displays the actual mapping between attributes and columns; the mapping has the following fields: Name: contains the name of the attribute we expose in our data model. Type: defines the data type of the attribute in our application. Column: specifies the column to which we want to map the attribute. Column Type: contains the type of the column in the database.

    View object: A view object represents a SQL query. You use the full power of the familiar SQL language to join, filter, sort, and aggregate data into exactly the shape required by the end-user task. The attributes in the View Objects actually come from the Entity Objects. In the end the VO will generate a query, but you basically build a VO by selecting which EOs need to participate in the VO and which attributes of those EOs you want to use. That's why you have the Entity Usage column, so you can see the relation between VO and EO. In the Query tab you can clearly see the query that will be generated for the VO. At this stage we don't need it and just use it for information purposes. In later stages we might use it.

    Application module: An application module is the controller of your data layer. It is responsible for keeping hold of the transaction. It exposes the data model to the view layer. You expose the VOs through the application module. This is the abstraction of your data layer which you want to show to the outside world. It defines an updatable data model and top-level procedures and functions (called service methods) related to a logical unit of work tied to an end-user task. While the base components handle all the common cases through built-in behavior, customization is always possible and the default behavior provided by the base components can be easily overridden or augmented.

    When you create EOs, a foreign key will be translated into an association in our model. It defines the type of relation and which side is the master and which the child, as well as what the visibility of the association looks like. A similar concept exists to identify relations between view objects. These are called view links. They are almost identical to associations, except that a view link is based upon attributes defined in the view object. It can also be based upon an association. Here's a short summary:
    Entity Objects: representations of tables
    Associations: relations between EOs - representations of foreign keys
    View Objects: logical model
    View Links: relationships between view objects
    Application Module: interface to your application

    Read the article

  • CSS: Freeze table header and first column, *but only on certain axes*

    - by Mega Matt
    Hello all, I have a variation on a common question, and I'll try to explain it as best I can. It may take some visualization on your part. I have an HTML table (in reality there are tables within tables within divs within tables -- I'm using the JSGantt plugin). I'd like for the table header to be frozen only when I scroll down on the y-axis, but if I need to scroll right to see more data, I would like it to scroll right. Meanwhile, as I scroll down (with the header row staying put), I'd like the first column of the table to scroll down with me. But when I scroll right, I want the first column to stay put (but, as I mentioned above, I want the header row to scroll with me). So essentially I've frozen the first column only on the x-axis and the header row only on the y-axis. I'll stop there for now. If anyone needs more clarification I can try to explain. I've tried this multiple ways, but I'm convinced that it may not be possible without some serious JavaScript. The table, by the way, is contained within an outer div with set dimensions, hence the need for me to scroll the data. Any help you can provide would be greatly appreciated. Thanks very much.

    Read the article

  • Complex SQL Query similar to a z order problem

    - by AaronLS
    I have a complex SQL problem in MS SQL Server, and in drawing on a piece of paper I realized that I could think of it as a single bar filled with rectangles, each rectangle having segments with different Z orders. In reality it has nothing to do with z order or graphics at all, but more to do with some complex business rules that would be difficult to explain. However, if anyone has ideas on how to solve the below, that will give me my solution. I have the following data: ObjectID, PercentOfBar, ZOrder (where smaller is closer) A, 100, 6 B, 50, 5 B, 50, 4 C, 30, 3 C, 70, 6 The result of my query that I want is this, in any order: PercentOfBar, ZOrder 50, 5 20, 4 30, 3 Think of it like this: if I drew rectangle A, it would fill 100% of the bar and have a z order of 6. 66666666666 AAAAAAAAAAA If I then laid out rectangle B, consisting of two segments, both segments would cover up rectangle A, resulting in the following rendering: 4444455555 BBBBBBBBBB As a rule of thumb, for a given rectangle, its segments should be laid out such that the highest z order is to the right of the lower z orders. Finally rectangle C would cover up only portions of rectangle B with its 30% segment that is z order 3, which would be on the left. You can hopefully see how this is represented in the output dataset I listed above: 3334455555 CCCBBBBBBB Now to make things more complicated, I actually have a 4th column such that this grouping occurs for each key: Input: SomeKey, ObjectID, PercentOfBar, ZOrder (where smaller is closer) X, A, 100, 6 X, B, 50, 5 X, B, 50, 4 X, C, 30, 3 X, C, 70, 6 Y, A, 100, 6 Z, B, 50, 2 Z, B, 50, 6 Z, C, 100, 5 Output: SomeKey, PercentOfBar, ZOrder X, 50, 5 X, 20, 4 X, 30, 3 Y, 100, 6 Z, 50, 2 Z, 50, 5 Notice in the output, the PercentOfBar for each SomeKey would add up to 100%. This is one I know I'm going to be thinking about when I go to bed tonight. Just to be explicit and have a question: What would be a query that would produce the results described above?
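    Not the SQL itself, but a small Python reference implementation of the painting rule described above can help pin down the semantics (and gives something to check a set-based solution against): each rectangle lays its segments left to right in ascending z-order across the full bar, and every position keeps the smallest (closest) z it has seen.

      from collections import Counter, defaultdict

      rows = [  # (SomeKey, ObjectID, PercentOfBar, ZOrder) -- the sample data from the question
          ("X", "A", 100, 6), ("X", "B", 50, 5), ("X", "B", 50, 4),
          ("X", "C", 30, 3), ("X", "C", 70, 6),
          ("Y", "A", 100, 6),
          ("Z", "B", 50, 2), ("Z", "B", 50, 6), ("Z", "C", 100, 5),
      ]

      def visible_segments(rows, resolution=100):
          by_key = defaultdict(lambda: defaultdict(list))
          for key, obj, pct, z in rows:
              by_key[key][obj].append((z, pct))
          result = []
          for key, rects in by_key.items():
              bar = [None] * resolution              # closest z seen so far at each position
              for segments in rects.values():        # painting order doesn't matter: the minimum wins
                  pos = 0
                  for z, pct in sorted(segments):    # ascending z, laid out left to right
                      width = resolution * pct // 100
                      for i in range(pos, pos + width):
                          if bar[i] is None or z < bar[i]:
                              bar[i] = z             # smaller z is closer, so it stays visible
                      pos += width
              for z, cells in Counter(bar).items():
                  result.append((key, cells * 100 // resolution, z))
          return result

      for row in visible_segments(rows):
          print(row)   # X: 30% at z3, 20% at z4, 50% at z5; Y: 100% at z6; Z: 50% at z2, 50% at z5

    The output matches the expected result set above (order is free), which suggests a set-based answer only needs to compute, per key and per position, the minimum ZOrder of the segment covering that position.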

    Read the article

  • Uploadify plugin doesn't call Java Servlet

    - by sergionni
    Hello, I just started using the Uploadify Flash plugin instead of the standard HTML UI. And I met the following problem: when I click the "Upload Files" link, the progress is shown and a "completed" status appears, but in reality nothing happens - the Java servlet isn't called on the backend. There is an upload servlet, and uploading was performed this way earlier: < form enctype="multipart/form-data" method="post" target="uploadFrame" action="<%= request.getContextPath() %>/uploadFile?portletId=${portletId}&remoteFolder=${remoteFolder}">... After adding the Uploadify plugin, the UI now looks like this - plugin part (configuration): <script> ... oScript.text+= "$j('#uploadify').uploadify({"; oScript.text+= "'uploader' : 'kne-portlets/js/lib/uploadify/scripts/uploadify.swf',"; oScript.text+= "'script' : '<%= request.getContextPath() %>/uploadFile?portletId=${portletId}&remoteFolder=<%= decodedString %>',"; oScript.text+= "'cancelImg': 'kne-portlets/js/lib/uploadify/cancel.png',"; oScript.text+= "'folder' : '<%= decodedString %>',"; oScript.text+= "'queueID' : 'fileQueue',"; oScript.text+= "'auto' : false,"; oScript.text+= "'multi' : false,"; //oScript.text+= "'sizeLimit' : 1000"; oScript.text+= "});"; oScript.text+= "});"; ... </script> The 'script' parameter here points to the Java servlet on the backend; <%= decodedString %> is the folder path, whose value is \\file-srv\demo. Part for uploading: <input type="file" name="uploadify" id="uploadify" /> <a href="javascript:$j('#uploadify').uploadifyUpload();">Upload Files</a> Where is my mistake? The 'script' param in the plugin config points to the Java servlet on the backend, but the servlet isn't triggered. Error when the 'script' param isn't correct: http://img190.imageshack.us/i/errormm.png/ Thank you for your assistance.

    Read the article

  • Is LaTeX worth learning today?

    - by Ender
    I know that LaTeX is big in the world of academia, and was probably a big name in desktop publishing before the glory days of WordPerfect and Microsoft Office, but as a Windows user who is interested in the power of LaTeX and the general smoothness of a LaTeX-generated page, is it really worth learning? In a couple of months I'll be starting my final year in Computer Science, and LaTeX has been bounced around the campus by many of the Linux geeks. In reality, is there any need to use it today? What will I actually gain from it, and will I enjoy using it? Finally, how does one use LaTeX on a Windows machine? What software do I really need? I've read a couple of guides but many of them seem like overkill. Please help break a LaTeX newbie into the world of professional academic publishing! EDIT: I've toyed with LaTeX for a while, and have even learned that it's pronounced "lay-tech", not "lay-tecks". I'll agree once again with the accepted answer in saying that MiKTeX is the best solution for Windows users.

    Read the article

  • JS: variable inheritance in anonymous functions - scope

    - by tkSimon
    Hey guys, someone from Doctype sent me here. Long story short: var o="before"; x = function() //this needs to be an anonymous function { alert(o); //the variable "o" is from the parent scope }; o="after"; //this changes "o" in the anonymous function x(); //this results in alert("after"); //which is not the way I want/need it In reality my code is somewhat more complex. My script iterates through many HTML objects and adds an event listener to each element. I do this by declaring an anonymous function for each element and calling another function with an ID as an argument. That ID is represented by the "o" variable in this example. After some thinking I understand why it is the way it is, but is there a way to get JS to evaluate o as I declare the anonymous function, without dealing with the id attribute and fetching my ID from there? My full source code is here: http://pastebin.com/GMieerdw - the anonymous function is on line 303.
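    For what it's worth, the same late-binding behaviour exists in Python closures, and the usual fixes look the same in both languages: introduce an extra function scope (in JS, an immediately-invoked function that takes o as a parameter), or bind the current value at definition time. A tiny Python sketch of the difference:

      o = "before"

      def make_late():
          return lambda: print(o)          # captures the variable o, not its current value

      def make_bound(value=o):             # default argument is evaluated right now
          return lambda: print(value)

      late, bound = make_late(), make_bound()
      o = "after"
      late()    # -> after   (same surprise as the JS example above)
      bound()   # -> before  (the value was frozen when the function was defined)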

    Read the article

  • Why are my functional tests failing?

    - by Mongus Pong
    I have generated some scaffolding for my Rails app. I am running the generated tests and they are failing. For example: test "should create area" do assert_difference('Area.count') do post :create, :area => { :name => 'area1' } end assert_redirected_to area_path(assigns(:area)) end This test is failing, saying: 1) Failure: test_should_create_area(AreasControllerTest) [/test/functional/areas_controller_test.rb:16]: "Area.count" didn't change by 1. <3 expected but was <2. There is only one field in the model: name. I am populating it, so it can't be because I am failing to populate the only field. I can run the site and create an area with the name 'area1'. So reality is succeeding, but the test is failing. I can't ask why it's failing, because I'm sure there isn't enough information here for anyone to know why. I'm just stuck on knowing what avenues to go down to work out why the test is failing. Even puts statements I add to the code don't print anything... What steps can I take to track this down?

    Read the article

  • Linq, Left Join and Dates...

    - by BitFiddler
    So my situation is that I have a LINQ to SQL model that does not allow dates to be null in one of my tables. This is intended, because the database does not allow nulls in that field. My problem is that when I try to write a LINQ query with this model, I cannot do a left join with that table anymore because the date is not a nullable field, and so I can't compare it to Nothing. Example: There is a Movie table, {ID,MovieTitle}, and a Showings table, {ID,MovieID,ShowingTime,Location} Now I am trying to write a statement that will return all those movies that have no showings. In T-SQL this would look like: Select m.* From Movies m Left Join Showings s On m.ID = s.MovieID Where s.ShowingTime is Null Now in this situation I could test for Null on the 'Location' field, but this is not what I have in reality (just a simplified example). All I have are non-null dates. I am trying to write in LINQ: From m In dbContext.Movies _ Group Join s In Showings on m.ID Equals s.MovieID into MovieShowings = Group _ From ms In MovieShowings.DefaultIfEmpty _ Where ms.ShowingTime is Nothing _ Select ms However I am getting an error saying 'Is' operator does not accept operands of type 'Date'. Operands must be reference or nullable types. Is there any way around this? The model is correct: there should never be a null in the Showings.ShowingTime column. But if you do a left join, and there are no showings for a particular movie, then ShowingTime SHOULD be Nothing for that movie... Thanks everyone for your help.

    Read the article

  • Need to replace 3rd party WinForm controls, what's the closest WPF equivalent?

    - by Refracted Paladin
    I am tired of Windows Forms...I just am. I am not trying to start a debate on it; I am just bored with it. Unfortunately we have become dependent on 4 controls in DevExpress XtraEditors. I have had nothing but difficulties with them and I want to move on. What I need now is the closest replacement for the 4 controls I am using. Here they are: LookUpEdit - this is a dropdown that filters the dropdown list as you type. MemoExEdit - this is a textbox that 'pops up' a bigger area when it has focus CheckedComboBoxEdit - this is a dropdown of checkboxes. CheckedListBoxControl - this is a nicely columned list box of checkboxes This is a LOB app that has tons of data entry. In reality, the first two are nice but not essential. The second two are essential in that I would either need to replicate the functionality or change the way the users are interacting with that particular data. I am looking for help in replicating these in a WPF environment with existing controls (CodePlex, etc.) or in straight XAML. Any code or direction would be greatly appreciated, but mostly I am hoping to avoid any commercial 3rd-party WPF controls and would instead like to focus on building them myself (but I need direction) or using CodePlex.

    Read the article

  • Error Cannot create an Instance of "ObjectName" in Designer when using <UserControl.Resources>

    - by Mike Bynum
    Hi all, I'm trying to bind a ComboBox ItemsSource to a static resource. I'm oversimplifying my example so that it's easy to understand what I'm doing. So I have created a class public class A : ObservableCollection<string> { public A() { IKBDomainContext Context = new IKBDomainContext(); Context.Load(Context.GetIBOptionsQuery("2C6C1Q"), p => { foreach (var item in SkinContext.IKBOptions) { this.Add(item); } }, null); } } So the class has a constructor that populates itself using a DomainContext that gets data from a persisted database. I'm only doing reads on this list, so I don't have to worry about persisting back. In XAML I add a reference to the namespace of this class, then I add it to the UserControl.Resources of the page control. <UserControl.Resources> <This:A x:Key="A"/> </UserControl.Resources> And then I use this StaticResource to bind to my ComboBox ItemsSource. In reality I have to use a DataTemplate to display this object properly, but I won't add that here. <Combobox ItemsSource="{StaticResource A}"/> Now when I'm in the designer I get the error: Cannot Create an Instance of "A". If I compile and run the code, it runs just fine. This seems to only affect editing of the XAML page. What am I doing wrong?

    Read the article

  • Capture keystrokes (e.g., function keys) while a messagebox is up

    - by FastAl
    We have a large WinForms app, and there is a built-in bug reporting system that can be activated during testing via the F5 Key. I am capturing the F5 key with .Net's PreFilterMessage system. This works fine on the main forms, modal dialog boxes, etc. Unfortunately, the program also displays windows messageboxes when it needs to. When there is a bug with that, e.g., wrong text in the messagebox or it shouldn't be there, the messagefilter isn't executed at all when the messagebox is up! I realize I could fix it by either rewriting my own messagebox routine, or kicking off a separate thread that polls GetAsyncKeyState and calls the error reporter from there. However I was hoping for a method that was less of a hack. Here's code that manifests the problem: Public Class Form1 Implements IMessageFilter Private Sub Form1_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Click MsgBox("now, a messagebox is up!") End Sub Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load Application.AddMessageFilter(Me) End Sub Public Function PreFilterMessage(ByRef m As System.Windows.Forms.Message) _ As Boolean Implements IMessageFilter.PreFilterMessage Const VK_F5 As Int32 = &H74 Const WM_KEYDOWN As Integer = &H100 If m.Msg = WM_KEYDOWN And m.WParam.ToInt32 = VK_F5 Then ' In reality code here takes a screenshot, saves the program state, and shows a bug report interface ' IO.File.AppendAllText("c:\bugs.txt", InputBox("Describe the bug:")) End If End Function End Class Many thanks.

    Read the article

  • C++: Retrieving values of static const variables at a constructor of a static variable

    - by gilbertc
    I understand that the code below would result in a segmentation fault because at the ctor of A, B::SYMBOL was not initialized yet. But why? In reality, A is an object that serves as a map that maps the SYMBOLs of classes like B to their respective IDs. C holds this map (A) statically so that it can provide the mapping as a class function. The primary function of A is to serve as a map for C that initializes itself at startup. How can I do that without a segmentation fault, provided that I can still use B::ID and B::SYMBOL in the code (no #define please)? Thanks! Gil. class A { public: A() { std::cout<<B::ID<<std::endl; std::cout<<B::SYMBOL<<std::endl; } }; class B { public: static const int ID; static const std::string SYMBOL; }; const int B::ID = 1; const std::string B::SYMBOL = "B"; class C { public: static A s_A; }; A C::s_A; int main(int c, char** p) { }

    Read the article

  • Limiting choices from an intermediary ManyToMany junction table in Django

    - by Matthew Rankin
    Background I've created three Django models—Inventory, SalesOrder, and Invoice—to model items in inventory, sales orders for those items, and invoices for a particular sales order. Each sales order can have multiple items, so I've used an intermediary junction table—SalesOrderItems—using the through argument for the ManyToManyField. Also, partial billing of a sales orders is allowed, so I've created a ForeignKey in the Invoice model related to the SalesOrder model, so that a particular sales order can have multiple invoices. Here's where I deviate from what I've normally seen. Instead of relating the Invoice model to the Item model via a ManyToManyField, I've related the Invoice model to the SalesOrderItem intermediary junction table through the intermediary junction table InvoiceItem. I've done this because it better models reality—our invoices are tied to sales orders and can only include items that are tied to that sales order as opposed to any item in inventory. I will admit that it does seem strange having the intermediary junction table of a ManyToManyField related to the intermediary junction table of another ManyToManyField. Question How can I limit the choices available for the invoice_items in the Invoice model to just the sales_order_items of the SalesOrder model for that particular Invoice? (I tried using limit_choices_to= {'sales_order': self.invoice.sales_order}) as part of the item = models.ForeignKey(SalesOrderItem) in the InvoiceItem model, but that didn't work. Am I correct in thinking that limiting the choices for the invoice_items should be handled in the model instead of in a form? Code class Item(models.Model): item_num = models.SlugField(unique=True) default_price = models.DecimalField(max_digits=10, decimal_places=2, blank=True, null=True) class SalesOrderItem(models.Model): item = models.ForeignKey(Item) sales_order = models.ForeignKey('SalesOrder') unit_price = models.DecimalField(max_digits=10, decimal_places=2) quantity = models.DecimalField(max_digits=10, decimal_places=4) class SalesOrder(models.Model): customer = models.ForeignKey(Party) so_num = models.SlugField(max_length=40, unique=True) sales_order_items = models.ManyToManyField(Item, through=SalesOrderItem) class InvoiceItem(models.Model): item = models.ForeignKey(SalesOrderItem) invoice = models.ForeignKey('Invoice') unit_price = models.DecimalField(max_digits=10, decimal_places=2) quantity = models.DecimalField(max_digits=10, decimal_places=4) class Invoice(models.Model): invoice_num = models.SlugField(max_length=25) sales_order = models.ForeignKey(SalesOrder) invoice_items = models.ManyToManyField(SalesOrderItem, through='InvoiceItem')
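    One common pattern (sketched here against a recent Django, not tested with the exact models above) is to leave the model untouched and narrow the queryset per form, since limit_choices_to is evaluated without any instance context and so can't refer to the invoice being edited; the view or inline formset passes the sales order in:

      from django import forms
      # assuming the InvoiceItem and SalesOrderItem models above are importable

      class InvoiceItemForm(forms.ModelForm):
          class Meta:
              model = InvoiceItem
              fields = ("item", "unit_price", "quantity")

          def __init__(self, *args, sales_order=None, **kwargs):
              super().__init__(*args, **kwargs)
              if sales_order is None and self.instance.pk:
                  sales_order = self.instance.invoice.sales_order
              if sales_order is not None:
                  # only offer line items that belong to this invoice's sales order
                  self.fields["item"].queryset = SalesOrderItem.objects.filter(
                      sales_order=sales_order)

      # e.g. in a view: InvoiceItemForm(request.POST or None, sales_order=invoice.sales_order)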

    Read the article

  • rails i18n - translating text with links inside.

    - by egarcia
    Hi there! I'd like to i18n a text that looks like this: Already signed up? Log in! Note that there is a link in the text. In this example it points to Google - in reality it will point to my app's log_in_path. I've found two ways of doing this, but neither of them looks "right". The first way I know involves having this in my en.yml: log_in_message: "Already signed up? <a href='{{url}}'>Log in!</a>" And in my view: <p> <%= t('log_in_message', :url => login_path) %> </p> This works, but having the <a href=...</a> part in the en.yml doesn't look very clean to me. The other option I know is using localized views - login.en.html.erb and login.es.html.erb. This also doesn't feel right, since the only different line would be the aforementioned one; the rest of the view (~30 lines) would be repeated for all views. It would not be very DRY. I guess I could use "localized partials", but that seems too cumbersome; I think I prefer the first option to having so many tiny view files. So my question is: is there a "proper" way to implement this?

    Read the article
