Search Results

Search found 16914 results on 677 pages for 'single threaded'.

  • Joining H264 *without* re-encoding

    - by jdmuys
    I have two halves of a single show in two .MP4 files, encoded in H264. I would like to join them without re-encoding. Is this possible? I managed to create a joined video as a QuickTime file (.mov) using QuickTime Pro, but QuickTime Pro will not convert it back to .MP4 without re-encoding. This may be because, looking inside the .mov file, the two H264 videos are still stored as separate "objects". I have also been struggling with MPEG Streamclip without reaching a real solution, but I may have missed something. Note that I do not have the same issue with MPEG2 files: I can export them to an .MPEG container or a .TS file, for example, and then join them without re-encoding using MPEG Streamclip. Any suggestions welcome, preferably using Mac software.
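
    One common lossless approach, sketched below under the assumption that ffmpeg (which also runs on the Mac) is available and that both halves share identical codec parameters: ffmpeg's concat demuxer can stream-copy the parts into a single .mp4 without re-encoding. File names are placeholders.

      # Sketch: lossless join of two H264 .mp4 halves via ffmpeg's concat
      # demuxer with stream copy (no re-encode). Assumes a reasonably
      # recent ffmpeg on the PATH; file names are placeholders.
      import os
      import subprocess
      import tempfile

      def join_mp4(parts, output):
          # The concat demuxer reads a text file listing the inputs in order.
          with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
              for part in parts:
                  f.write("file '%s'\n" % os.path.abspath(part))
              list_path = f.name
          try:
              subprocess.check_call([
                  "ffmpeg", "-f", "concat", "-safe", "0", "-i", list_path,
                  "-c", "copy",  # copy the streams verbatim -- no re-encoding
                  output,
              ])
          finally:
              os.unlink(list_path)

      join_mp4(["part1.mp4", "part2.mp4"], "joined.mp4")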

  • DVI-D Splitter Not Working with GeForce 8400gs

    - by jimdrang
    I have a GeForce 8400 GS with a DVI port and a VGA port on the back. I was using dual monitors with one VGA and one DVI cable. I wanted both displays to be digital, so I bought a DVI-D splitter, put one DVI cable in each monitor, connected them to the splitter, and plugged the single merged connector into the card's DVI port. It will not recognize the second monitor (I'm not even sure how it determined which one was the first monitor). The tech specs state that it supports "Two dual-link DVI outputs for digital flat panel display resolutions up to 2560x1600" (http://www.nvidia.com/object/geforce_8400_tech_specs.html). Do I need a different converter, or is my only option for dual monitors with this card one VGA, one DVI?

  • Upgrade non-RAID server to RAID

    - by AZee
    I have just learned that our PDC has a single drive with 2 partitions. I also know that this drive has bad blocks, as recorded in the event log. What I would like to do is convert this to a RAID solution with a nice balance between economy and performance. I will admit that I have only configured servers with RAID from scratch, and have no experience upgrading an existing system into a RAID system. In fact, I'm not sure it is even possible. Since this is the PDC for 350+ workstations, downtime is a real concern. I'd like to hear from other System Administrators how they would tackle this, and their recommendations for all devices. At this time it seems to me that I can either replace the existing drive and restore from backup, or install a controller and drives, configure the RAID, and basically start from scratch. Thank you for taking your time. ~AZee

  • Server 2008 Active Directory file sharing restriction

    - by Usman
    My network scenario is: two locations connected to each other through wireless towers. The 1st network's IP addresses are 10.10.10.1 to 10.10.10.150; the 2nd network's are 10.10.10.200 to 10.10.10.254. Both locations run their own Active Directory (MS Server 2008) and share data with the other network through file sharing. My problem is that every single client can explore the whole network simply by typing \\computername. I want to implement some restrictions on file sharing across the networks: I want to stop clients on the 1st network from opening computers or the server on the 2nd network. Is there a Group Policy or folder-permission approach I can implement on both networks?

  • Business Forces: SOA Adoption

    The only constant in today's business environment is change. Businesses that continuously foresee change and adapt quickly will gain market share and increased growth. In our ever-growing global business environment, change is driven by data: collecting, maintaining, verifying and distributing it. Companies today are made and broken over data. Would anyone still use Google if it did not have one of the most accurate search indexes on the internet? No, because its value is in its data and the quality of that data. Due to this increasing focus on data, companies have been adopting new methodologies to gain more control over their data while attempting to reduce the cost of maintaining it. In addition, companies are trying to reduce the time it takes to analyze data against various market forces, so they can foresee changes before they actually occur.

    Benefits of Adopting SOA
    - Services can be maintained separately from other services and applications, so a change in one service affects only itself and its client services or applications.
    - Services allow system functionality to be distributed across a network, or multiple networks.
    - The cost of maintaining business functionality is much higher in standard application development than in SOA, because a single service can be maintained once and shared by many applications, instead of multiple instances of the same business functionality being duplicated, hard-coded, in several applications.
    - When multiple applications use a single service for a specific business function, all of the data being processed is consistent in quality and accuracy across those applications.

    Disadvantages of Adopting SOA
    - Increased initial costs and timelines, because services need to be created and applications need to be modified to call them.
    - For an SOA project to be successful, it must obtain company and management support in order to gain the proper exposure, funding, and attention. If SOA is new to a company, the company must also support the proper training for the project to be designed and implemented correctly.

    References: Tews, R. (2007). Beyond IT: Exploring the Business Value of SOA. SOA Magazine, Issue XI.

  • slow virtualbox guest

    - by ecoologic
    I run an Ubuntu 12.04 guest on an Ubuntu 12.04 host with VirtualBox, and the guest is much, much slower than the host (ALT+TAB takes 4-5 seconds). I had a look around and found contradictory opinions on VirtualBox vs. VMware (free), so I thought I'd keep the former. Both systems are up to date, I installed the Guest Additions on the guest, and I split memory and video memory (64 MB) evenly between guest and host. I am running a Toshiba M200 laptop with 4 GB RAM and shared video memory. The host BIOS does not include a configuration option for machine virtualization. I have 2 CPUs and I can't give them both to the VM. Is there anything I overlooked that could solve my problem? Feel free to ask for more info, and thank you for any help. EDIT: Idling with the system monitor open, the (single) guest CPU never gets below 55% and can rise to 80-90% just from moving the mouse around; opening Firefox pushes the guest monitor to 100%, while the host shows both CPUs working evenly at around 60%. My CPU is an Intel® Core™2 Duo CPU T5450 @ 1.66GHz × 2. If this is not a configuration problem, does it mean my machine is too weak for virtualization?
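
    One thing worth ruling out first (a minimal sketch, assuming a Linux host): whether the CPU advertises hardware virtualization at all. Intel VT-x shows up as the vmx flag in /proc/cpuinfo and AMD-V as svm; without either, the guest runs in software mode and will feel far slower.

      # Sketch: check on a Linux host whether the CPU advertises hardware
      # virtualization (Intel VT-x = "vmx", AMD-V = "svm").
      flags = set()
      with open("/proc/cpuinfo") as f:
          for line in f:
              if line.startswith("flags"):
                  flags.update(line.split(":", 1)[1].split())

      if "vmx" in flags or "svm" in flags:
          print("CPU advertises hardware virtualization support")
      else:
          print("no vmx/svm flag: guests cannot use VT-x/AMD-V acceleration")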

  • Do all mods simply alter game files? [on hold]

    - by Starkers
    Some mods you install by dragging certain files into your game directory, replacing the existing files. Other mods, though, come with an installer where you can set parameters first. Does the installer then go and replace certain files automatically? At the end of the day, is that all the installation of any mod is: the replacement of certain files inside the game's root directory? Do mods exist which don't fit that description, installing files outside the game's root? Why would they do that? All the mods I can think of just replace certain files inside the game's root. However, I know Team Fortress was spawned from a multiplayer Half-Life 1 mod. Do you reckon that mod installed files outside the root to enable multiplayer over a network for a single-player game? How rare are such mods? Or do they not even exist? Do even extensive mods make all their changes inside the root?

  • OSB, Service Callouts and OQL

    - by Sabha
    Oracle Fusion Middleware customers use Oracle Service Bus (OSB) for virtualizing service endpoints and implementing stateless service orchestrations. Behind the performance and speed of OSB, there are a couple of key design implementations that can affect application performance and behavior under heavy load. One of the most heavily used features in OSB is the Service Callout pipeline action, used for message enrichment and for invoking multiple services as part of one single orchestration. Overuse of this feature, without understanding its internal implementation, can lead to serious problems. This series will delve into OSB internals, the problem associated with use of Service Callout under high load, diagnosing it via thread dump and heap dump analysis using tools like ThreadLogic and OQL (Object Query Language), and resolving it. The first section in the series will mainly cover the threading model used internally by OSB for implementing Route vs. Service Callout actions. The second section of the "OSB, Service Callouts and OQL" blog posting will delve into thread dump analysis of an OSB server, detecting threading issues related to Service Callout, and using heap dump analysis and OQL to identify the related proxies and business services involved. The final section of the series will focus on the corrective action to avoid Service Callout related OSB server hangs. Before we dive into the solution, we need to briefly discuss Work Managers in WLS; please refer to the blog posting for more details.

  • DFS: Error creating domain-based namespace "The namespace cannot be queried. The RPC server is unavailable."

    - by Scolytus
    I am trying to create a domain-based namespace, but when I hit "Next" on the "choose type" wizard page I get the message: "The namespace cannot be queried. The RPC server is unavailable." I found one article regarding this problem, and indeed the DNS resolution of the DomainDNSName was flawed. But although I fixed this DNS issue, I still get the message. A stand-alone namespace can be created. Any ideas how to solve this problem? Is it possible that this is related to the fact that I'm using a single-label domain name?

  • fastcgi-mono-server with Nginx is much slower than xsp4

    - by marxin
    We started testing our MVC4 app on the xsp4 server compiled with mono-3.0.3; the speed was good enough, so we decided to set up a production fastcgi-mono-server4 (version 2.11.0.0) behind nginx (1.2.6-r1). A single request that loads some JSON took ~200 ms on xsp4, but nginx serves the same request in about 1.2 s, and I am wondering where such a slowdown could come from. I followed the nginx configuration at http://www.mono-project.com/FastCGI_Nginx, and fastcgi-mono-server4 listens on a socket for nginx. Do you have any ideas how to log some timestamps that would help me? Thanks
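
    nginx can record per-request timings itself: a custom log_format using the $request_time and $upstream_response_time variables shows whether time is spent inside nginx or waiting on the FastCGI backend. As a complementary client-side check, a minimal sketch (URLs and ports are placeholders):

      # Sketch: time the same JSON request against both servers from the
      # client side to narrow down where the extra second goes.
      # Both URLs are placeholders for the xsp4 and nginx endpoints.
      import time
      import urllib.request

      def best_time(url, attempts=10):
          best = float("inf")
          for _ in range(attempts):
              start = time.time()
              urllib.request.urlopen(url).read()
              best = min(best, time.time() - start)
          return best

      print("xsp4:  %.3f s" % best_time("http://localhost:8080/some/json"))
      print("nginx: %.3f s" % best_time("http://localhost/some/json"))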

  • Pair Programming, for or against? [on hold]

    - by user1037729
    I believe it has many advantages over individual programming:

    Pros
    - By pairing senior with relatively junior staff, the junior can get up to speed on both the project and general computing experience, and the senior has to re-think the problem in order to communicate it to the junior, thus re-checking his own thinking (the rubber-duck principle!).
    - At least 2 people know about any single piece of work: if one person is away the other can cover, and if someone leaves the project, knowledge transfer is easier.
    - Two brains on a complex task are more effective; communication keeps the work free-flowing and provides redundancy in decision making.
    - Code is effectively reviewed as it is being written, so there is no need for a separate review phase, which requires a context switch, since someone who has not been working on the piece in question would have to understand and review the related code. Reviewing code on your own which you haven't written or architected is not fun, hence counterproductive.

    Cons
    - Less bandwidth for performing tasks: let's say we have 4 devs; pair programming requires 2 devs per task, so we would be doing 2 tasks concurrently as opposed to 4. I believe this "con" does not stand up, as the pair-programmed task would complete sooner and comes with a review built in for free! I.e. the pair-programmed task would be more efficient and thus free up resources earlier.
    - Less flexibility to chop and change tasks, as two developers are tied into a task; when flexibility is required this could be a problem.

  • Blackberry Access to Powered Down Exchange

    - by Sam Cogan
    I work with a company (it's more of a charity, really) that has a single Exchange server and a few BlackBerry users who download email via POP3 or IMAP. They are in a developing country, so every night they turn off the Exchange server to save power. However, they now want the BlackBerry users to be able to get mail at night. They have a Linux server (a rented VPS), so they are considering having the mail delivered there and then pulled into Exchange via a POP connector. That way the BES users (who would now pull their POP email from the Linux server) could still get their mail at night. Can anyone think of a better solution to this problem that I may have missed? Unfortunately there is no convincing the company to leave the machine on overnight.

  • Is there a way to automate testing for a GNS3 network topology?

    - by Chedy2149
    I'm working on an MPLS backbone topology in GNS3 that includes several networking technologies: IPv6, BGP, OSPF, IPv6-to-IPv4 dynamic tunneling, MPLS-based VPNs, MPLS-based traffic engineering... I'm a beginner with basic networking knowledge (IPv4 addressing, basic routing and switching), so I can't easily test whether my topology works or not; moreover, while fiddling with the configuration I've noticed that sometimes getting the MPLS config right but missing the BGP config messes up my whole topology. Given that I use Cisco 7200 series routers, is there any way to build a test harness for my topology in order to guarantee correctness with a single push-button automated process?
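
    One hedged sketch of such a harness: dynamips/GNS3 exposes each emulated router's console on a local telnet port, so "show" output can be collected and asserted on from a script. The console ports, prompt, and the specific check below are assumptions for illustration.

      # Sketch of a push-button test harness for a GNS3 topology.
      # Assumes each router console is reachable on a local telnet port
      # (ports and the BGP check are illustrative, not from the original).
      import telnetlib

      ROUTERS = {"PE1": 2001, "PE2": 2002}  # hypothetical console ports

      def run_cmd(port, cmd, prompt=b"#"):
          tn = telnetlib.Telnet("127.0.0.1", port, timeout=10)
          tn.write(b"\r\n")
          tn.read_until(prompt, timeout=5)
          tn.write(cmd.encode("ascii") + b"\r\n")
          output = tn.read_until(prompt, timeout=10).decode(errors="replace")
          tn.close()
          return output

      def check_bgp_neighbors(router):
          out = run_cmd(ROUTERS[router], "show ip bgp summary")
          # On IOS, "Idle" or "Active" in the summary means a session is down.
          assert "Idle" not in out and "Active" not in out, router + ": BGP down"

      for name in ROUTERS:
          check_bgp_neighbors(name)
      print("all checks passed")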

  • Grub2 won't detect Ubuntu 11.10 after reinstalling Windows XP hal.dll

    - by yoopian
    Hi, I'm an Ubuntu newbie here. I've installed Ubuntu 11.10 to dual boot on a single HDD. I did manual partitioning and basically forgot on which sda my /boot partition is. My installation worked out just fine and I installed updates with it. After a while, when I wanted to boot into Windows, it showed that I was missing the "hal.dll" file. I fixed this problem using the Windows resource CD, but then after booting up my PC it went straight to Windows XP. I've tried to manually reinstall Grub2 using a live CD/USB and it worked, but I think I have installed it on a different "sda#" (sda5 to be exact), because even though Grub2 loads when I boot my PC, only Windows XP shows up as my OS and Ubuntu 11.10 is missing. Now, I've tried installing boot-repair from the live CD/USB to solve my problems. Boot-repair tells me that the boot configuration was successful, but then a basic grub interface shows up (the black one with a grub command line). Now I can't even boot to Windows XP. Any help would be really appreciated. BTW, here are the notes from boot-repair that I was asked to save: http://paste.ubuntu.com/890228/. As you can see, there are boot files on sda5 and sda7. I think that's the core problem I have right now. Thanks in advance!

  • SBS 2008 reinstall on new machine

    - by Jon
    We purchased a single license of Windows Small Business Server 2008 for a medical office, and soon after, a power fault caused the server to overload, destroying the power supply and motherboard. I reinstalled on a new server and have the domain up and running again, but under System Properties it tells me I still need to activate Windows, and that the product key currently entered is invalid for activation. I've been trying to figure out how to get in touch with Microsoft to explain the situation and get a new activation key issued without having to pay for another license, which we can't afford, but the online documentation hasn't been particularly clear on the subject. Does anyone know how to do that, or whether I do need a new activation key to continue using SBS 2008? (It currently tells me I have 56 days left to activate.) Any tips would be appreciated. I'm not strictly a computer guy and I'm feeling a little lost.

  • Dealing with Fine-Grained Cache Entries in Coherence

    - by jpurdy
    On occasion we have seen significant memory overhead when using very small cache entries. Consider the case where there is a small key (say a synthetic key stored in a long) and a small value (perhaps a number or short string). With most backing maps, each cache entry will require an instance of Map.Entry, and in the case of a LocalCache backing map (used for expiry and eviction), there is additional metadata stored (such as last access time). Given the size of this data (usually a few dozen bytes) and the granularity of Java memory allocation (often a minimum of 32 bytes per object, depending on the specific JVM implementation), it is easily possible to end up with the case where the cache entry appears to be a couple dozen bytes but ends up occupying several hundred bytes of actual heap, resulting in anywhere from a 5x to 10x increase in stated memory requirements.

    In most cases, this increase applies to only a few small NamedCaches, and is inconsequential -- but in some cases it might apply to one or more very large NamedCaches, in which case it may dominate memory sizing calculations.

    Ultimately, the requirement is to avoid the per-entry overhead, which can be done either at the application level by grouping multiple logical entries into single cache entries, or at the backing map level, again by combining multiple entries into a smaller number of larger heap objects. At the application level, it may be possible to combine objects based on parent-child or sibling relationships (basically the same requirements that would apply to using partition affinity). If there is no natural relationship, it may still be possible to combine objects, effectively using a Coherence NamedCache as a "map of maps". This forces the application to first find a collection of objects (by performing a partial hash) and then to look within that collection for the desired object. This is most naturally implemented as a collection of entry processors to avoid pulling unnecessary data back to the client (and also to encapsulate that logic within a service layer).

    At the backing map level, the NIO storage option keeps keys on heap, and so has limited benefit for this situation. The Elastic Data features of Coherence naturally combine entries into larger heap objects, with the caveat that only data -- and not indexes -- can be stored in Elastic Data.
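
    A language-agnostic sketch of the "map of maps" idea described above (Python stands in for what would be entry-processor logic in Coherence; the bucket count and names are illustrative): many tiny logical entries are grouped under one physical entry keyed by a partial hash, so the per-entry overhead is paid once per bucket rather than once per object.

      # Sketch: bucket many small logical entries into fewer physical
      # cache entries. The dict stands in for a NamedCache; the bucket
      # count is an illustrative tuning knob.
      BUCKETS = 1024

      def bucket_key(logical_key):
          return hash(logical_key) % BUCKETS

      class BucketedCache:
          def __init__(self):
              self.cache = {}  # physical entries: bucket_key -> {key: value}

          def put(self, key, value):
              self.cache.setdefault(bucket_key(key), {})[key] = value

          def get(self, key):
              # First find the bucket (partial hash), then look inside it.
              return self.cache.get(bucket_key(key), {}).get(key)

      c = BucketedCache()
      c.put(42, "tiny value")
      print(c.get(42))  # -> "tiny value"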

  • Octrees and Vertex Buffer Objects

    - by sharethis
    Like many others, I want to code a game with voxel-based terrain. The data is represented by voxels, which are rendered using triangles. I have heard of two different approaches and want to combine them. First, I could divide the space into chunks of a fixed size, like many games do. I could then generate a polygon mesh for each chunk and store it in a vertex buffer object (VBO). Each time a voxel changes, the mesh and VBO of its chunk are recreated. Additionally, it is easy to dynamically load and unload parts of the terrain. Another approach would be to use an octree and divide the space into eight cubes, which are divided again and again. That way I could render the terrain efficiently, because I don't have to descend into a solid cube and can draw it as a single one (with a repeated texture). What I would like to use for my game is an octree data structure, but I can't imagine how to use VBOs with that. How is that done, or is it impossible?
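
    One common compromise, sketched below with hypothetical names (the meshing and upload callbacks stand in for whatever the engine and GL binding provide): keep the octree for culling and solid-volume queries, but attach a VBO only to nodes at a fixed depth, i.e. chunk-sized subtrees, and when a voxel changes mark just that node dirty and rebuild only its buffer.

      # Sketch: an octree whose nodes at a fixed depth own a VBO handle.
      # build_mesh/upload are placeholder callbacks for meshing and the
      # actual OpenGL buffer upload.
      VBO_DEPTH = 4  # illustrative: nodes at this depth are "chunks"

      class OctreeNode:
          def __init__(self, depth):
              self.depth = depth
              self.children = []  # eight children once subdivided, else empty
              self.solid = False  # uniform-solid nodes need no further descent
              self.dirty = True   # geometry needs (re)building
              self.vbo = None     # GPU buffer handle, used only at VBO_DEPTH

      def rebuild_dirty(node, build_mesh, upload):
          # Re-mesh and re-upload only dirty chunk-level nodes.
          if node.depth == VBO_DEPTH:
              if node.dirty:
                  node.vbo = upload(build_mesh(node))
                  node.dirty = False
              return
          for child in node.children:
              rebuild_dirty(child, build_mesh, upload)

      root = OctreeNode(0)
      rebuild_dirty(root, build_mesh=lambda n: [], upload=lambda mesh: object())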

  • Enterprise Portal Issue with the Ax Demo VPCs

    - by ssmantha
    Microsoft's Ax Demo VPC is configured for a static IP address, 192.168.0.1, because the VPC has a Domain Controller configured in it, which requires a static IP. When you put this VPC on a network with a different subnet and change the IP, you can observe that the sites http://sharepoint and http://sharepoint/EP cease to function and show "Page Not Found" errors in the browser. This is mainly due to the DNS configuration, which is not updated. The changes needed to make the sites function properly are as follows. In the Forward Lookup Zones of the DNS management console: the websites default, SharePoint, and projectserver are all mapped to a single port in IIS, port 80, and are distinguished by host headers. These host headers end up configured in DNS with incorrect IP address entries when you change the IP address of the VPC. Just change these values to point to the local loopback adapter (127.0.0.1), and point the DNS address in the TCP/IP properties to the same address. This will resolve the issue with the websites rendering. Initially you may get timeout errors while browsing these websites; be patient and try again, and it will work.

  • Deleting a partition on dual boot

    - by blade4
    I have split my C:\ physical disk into 3 separate partitions: C, H, and K. C has my original OS and is the active partition, H has another OS (dual boot), and K has song files. I want to keep the other OS (Windows Server), but when I am in that OS, I can't delete C ("Windows cannot format this partition"). What is the best way for me to get rid of C while keeping all the songs and the Server OS partition intact, so that I can replace C with another OS and then get rid of the original Windows Server OS (i.e. revert the dual boot to a single boot)?

  • Xterm is not completely erasing field lines

    - by user26367
    We have an SSH tunnel to a remote Unix box from Windows clients using Cygwin. It launches a terminal program from the Unix box locally on the Windows box for data input. The xterm window is launched as follows:

      xterm -fn 10x20 -bg DodgerBlue4 -fg white -cr white -ls -geometry 90x30 -e program

    When a screen goes from read-only mode to edit mode, the edit fields show underscores (____). When going back to read-only mode, a single-pixel artifact is left behind for each field:

      *readonly*
      User:

      *edit*
      User: ___________

      *after edit exit*
      User: .   <- this dot is left behind

    Any idea what we need to change to fix this?

  • Multiple .bkf files created in Backupexec 12.5 or 2010 related to heavy I/O?

    - by syuusuke
    Hey everyone, I was wondering if anyone who has used Backup Exec 12.5 or 2010 has ever experienced multiple .bkf files being created for a single job. To describe what I mean by multiple files: the .bkf files are created with random sizes under 2 GB, even though I've set the option to split the file at 10 GB. Some jobs will create 20 .bkf files in 1 job, with chunks ranging from 50 MB to 800 MB. Is this a sign of heavy I/O issues? Bandwidth limitations? I'm not sure; I'm here to seek advice and suggestions. I've set up another backup server with the exact same settings, and it seems to create a new .bkf file only when the 10 GB limit has been reached. I am backing up different machines, but I know my settings are an exact match to the problematic server's, or at least I think they are.

  • Can I create a virtual network interface to connect to a real network device?

    - by michelemarcon
    I have a networked Windows PC with 2 network interfaces. The first connects to a LAN with IP address 10.1.. The second connects to another LAN with IP address 10.2.. Maybe it's a dumb question, but is it possible to virtualize the second network interface, so that the PC can connect to the 2 LANs? If necessary, I may switch to Linux or paravirtualization. CLARIFICATION: I want to send DHCP broadcast packets on the second LAN, but not on the first LAN. I want to do it with one single physical network interface. At the moment, I'm not using any virtualization software.
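
    On the clarification, a sketch of emitting a DHCP discover out of one chosen interface only, assuming Python with scapy is acceptable (scapy runs on both Windows and Linux; the interface name is a placeholder, and sending raw frames requires admin/root rights):

      # Sketch: send a DHCP discover on one specific interface, so the
      # broadcast never appears on the other LAN. IFACE is hypothetical.
      from scapy.all import BOOTP, DHCP, IP, UDP, Ether, get_if_hwaddr, sendp

      IFACE = "eth1"  # placeholder: the NIC facing the second LAN
      mac = get_if_hwaddr(IFACE)

      discover = (
          Ether(src=mac, dst="ff:ff:ff:ff:ff:ff")
          / IP(src="0.0.0.0", dst="255.255.255.255")
          / UDP(sport=68, dport=67)
          / BOOTP(chaddr=bytes.fromhex(mac.replace(":", "")))
          / DHCP(options=[("message-type", "discover"), "end"])
      )
      sendp(discover, iface=IFACE)  # layer-2 send, bound to this interface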

  • Two views of Federation: inside out, and outside in

    - by Darin Pendergraft
    IDM customers that I speak to have spent a lot of time thinking about enterprise SSO: asking your employees to log in to multiple systems, each with a distinct hard-to-guess (translation: hard-to-remember) password that fits the corporate security policy for length and complexity, is a strategy that is just begging for a lot of help-desk password-reset calls. So forward-thinking organizations have implemented SSO for as many systems as possible.

    With the mix of enterprise apps moving to the cloud, it makes sense to continue this SSO strategy by federating with those cloud apps and services. Organizations maintain control, since employee access to the externally hosted apps is provided via the enterprise account. If the employee leaves, their access to the cloud app is terminated when their enterprise account is disabled. The employees don't have to remember another username and password - so life is good.

    From the outside in, I am excited about the increasing use of social sign-on, or BYOI (Bring Your Own Identity). The convenience of single sign-on is extended to customers/users/prospects when organizations enable access to business services using a social ID. The last thing I want when visiting a website or blog is to create another account, so using my Google or Twitter ID is a very nice, quick way to get access without having to go through a registration process that creates another username/password I have to try to remember.

    The convenience of not having to maintain multiple passwords is obvious, whether you are an employee or a customer - and the security benefit of not having lots of passwords to lose or forget is there as well. Are enterprises allowing employees to use their personal (social) IDs for enterprise apps? Not yet, but we are moving in the right direction, and we will get there some day.

  • Can't find new.h - getting gcc-4.2 on Quantal?

    - by Suyo
    I've been trying to compile the Valve Source SDK (2007) on my machine, but I keep running into the same error:

      In file included from ../public/tier1/interface.h:50:0,
                       from ../utils/serverplugin_sample/serverplugin_empty.cpp:13:
      ../public/tier0/platform.h:46:17: new.h: No such file or directory

    I'm pretty new to C++ coding and compiling, but using apt-file search I tried every single suggestion for the required files in the Makefile (libstdc++.a and libgcc_eh.a), and none worked. I then found a note in the Makefile saying gcc 4.2.2 is recommended - I assume the older code won't work with the newer version, but gcc-4.2 is unavailable in 12.10. So my questions are: If my assumption is right - how do I get gcc 4.2.2 on Quantal? If my assumption is wrong - what else could be the problem here? Relevant portion of the Makefile:

      # compiler options (gcc 3.4.1 will work - 4.2.2 recommended)
      CC=/usr/bin/gcc
      CPLUS=/usr/bin/g++
      CLINK=/usr/bin/gcc
      CPP_LIB="/usr/lib/gcc/x86_64-w64-mingw32/4.6/libstdc++.a /usr/lib/gcc/x86_64-w64-mingw32/4.6/libgcc_eh.a"

      # GCC 4.2.2 optimization flags, if you're using anything below, don't use these!
      OPTFLAGS=-O1 -fomit-frame-pointer -ffast-math -fforce-addr -funroll-loops -fthread-jumps -fcrossjumping -foptimize-sibling-calls -fcse-follow-jumps -fcse-skip-blocks -fgcse -fgcse-lm -fexpensive-optimizations -frerun-cse-after-loop -fcaller-saves -fpeephole2 -fschedule-insns2 -fsched-interblock -fsched-spec -fregmove -fstrict-overflow -fdelete-null-pointer-checks -freorder-blocks -freorder-functions -falign-functions -falign-jumps -falign-loops -falign-labels -ftree-vrp -ftree-pre -finline-functions -funswitch-loops -fgcse-after-reload
      #OPTFLAGS=

      # put any compiler flags you want passed here
      USER_CFLAGS=-m32

  • Ubuntu 11.10 loads from live usb fine, but boots to black screen from harddrive. Why?

    - by Estel
    A few days ago, the hard drive that had been running Windows XP (32-bit) just fine failed. The second hard drive in my computer held a few unimportant files, so I formatted it in the Ubuntu setup and installed 11.10 without a hitch. I had been using it for about a week, but decided to install Windows 7 (64-bit) in order to use networking with my home server (running Windows Server 2000). My system is 64-bit based, so I had no problems installing, other than a basic RAM error that required me to remove my RAM down to a single stick. I played with the settings in Windows 7 for around an hour before I shut down. After reinstalling the RAM, Windows 7 would not boot. At that point I assumed that something about my system was rejecting Win7, and I reinstalled Ubuntu. However, now Ubuntu (11.10) boots to a black screen. I've already attempted activating the grub menu with the shift key and followed the steps listed here: https://wiki.ubuntu.com/X/Troubleshooting/BlankScreen, but nothing seems to work. I've reinstalled twice now, with the same result each time. The very odd part about this whole scenario is that the USB I installed from has no problems booting as a live USB. This puzzles me greatly, because the hard drive boots straight to a black screen while the live USB loads normally. At this point, my only theory is that the boot sector of the hard disk was somehow corrupted by Win7, and that Ubuntu was unable to completely write over it. I used Darik's Boot and Nuke to wipe the drive, but was met with an error; this also puzzles me, because the hard disk has no problems reading or writing. Any suggestions/comments are appreciated. If you have a theory, I will be more than happy to oblige.

    Additional information:
    - Intel Core2 Duo E6400 2.13 GHz
    - nVidia GeForce 7-series (7900 GS)
    - 4 GB DDR2 333 MHz (2x 2 GB)
    - Dell XPS 410, BIOS revision 2.5.3
