Search Results

Search found 21028 results on 842 pages for 'single player'.

  • How to stream multiple files on demand in VLC?

    - by romkyns
    Is there any way at all that I can set up VLC on a server PC so that I can access a list of all my videos from another PC and pick one to be streamed on demand? I've been pointed at this streaming guide (pdf), but it's pretty useless. For a start, most of the menus in those screenshots don't match the actual current version of VLC, and then it sort of assumes you already know what you're doing. So far I've managed to figure out how to stream a single file, which I must choose on the server PC before watching - pretty useless if you ask me! The impenetrable "UI" doesn't help either... (P.S. The reason I'm going for streaming rather than the much simpler network-drive setup is described in this question.)

    Read the article

  • How can # of unique visitors be more than # of visits?

    - by Dallas
    I am confused as heck. When looking at a few individual pages, I am seeing weird results I hope someone can help explain. I am doing manual standard reports, not creating or using a widget. For each of the two tests below, the metrics I am using are Visits (not Visitors) and Unique Visitors. I am using Page as my primary dimension and Page Title as the secondary, and I am filtering to include certain pages. Example 1 - Looking at a single page, I see 731 unique visitors and 169 visits. How is this possible? Does Google just flip them around for some reason? Example 2 - Looking at several pages combined, if you examine the timeline, you can see that the numbers are all over the place. How is it possible to have more unique visitors than visits? What am I missing? I would suspect that if things were just flipped around, I should still see one line that is always below the other. Can anyone clue me in to what may be happening here? I also notice that the Visits column (just out of view) adds up to the total, but the Unique Visitors column clearly doesn't.

    Read the article

  • Is there a chroot build script somewhere?

    - by Nils
    I am about to develop a little script to gather information for a chroot jail. In my case this looks (at first glance) pretty simple: the application has a clean rpm install and puts almost all of its files into a subdirectory of /opt. My idea is: do a find for all binaries, check their library dependencies, record the results in a list, and rsync that list into the chroot target directory before starting the application. Now I wonder - is there any script around that already does such a job (perl/bash/python)? So far I have found only specialized solutions for single applications (like sftp-chroot). Update: I see three close votes for the reason "off topic". This question arose because I have to install that ancient piece of software on a server at work. So if you still feel this is off-topic - leave a comment...
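    A minimal sketch of the gathering steps described above, in Python (the /opt prefix, the output path and the ldd parsing are assumptions; it only illustrates the idea):

        #!/usr/bin/env python3
        # Collect the shared-library dependencies of all ELF binaries under a
        # prefix, so the resulting list can be rsync'ed into a chroot target.
        import os
        import subprocess

        PREFIX = "/opt/myapp"                 # assumed install prefix
        FILE_LIST = "/tmp/chroot-files.txt"   # assumed output path

        def is_elf(path):
            # cheap ELF check: first four bytes are the ELF magic
            try:
                with open(path, "rb") as f:
                    return f.read(4) == b"\x7fELF"
            except OSError:
                return False

        def ldd_deps(path):
            # parse ldd output and return the resolved library paths
            deps = set()
            out = subprocess.run(["ldd", path], capture_output=True, text=True)
            for line in out.stdout.splitlines():
                line = line.strip()
                if "=>" in line:              # "libfoo.so => /lib/libfoo.so (0x...)"
                    target = line.split("=>", 1)[1].strip().split(" ")[0]
                    if target.startswith("/"):
                        deps.add(target)
                elif line.startswith("/"):    # the dynamic loader itself
                    deps.add(line.split(" ")[0])
            return deps

        files = set()
        for root, _dirs, names in os.walk(PREFIX):
            for name in names:
                path = os.path.join(root, name)
                if is_elf(path):
                    files.add(path)
                    files.update(ldd_deps(path))

        with open(FILE_LIST, "w") as f:
            f.write("\n".join(sorted(files)) + "\n")
        # afterwards, something like:
        #   rsync -a --files-from=/tmp/chroot-files.txt / /chroot/target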

    Read the article

  • Speaking at Sinergija12

    - by DigiMortal
    Next week I will be a speaker at Sinergija12, the biggest Microsoft conference held in Serbia. The first time I visited Sinergija it was clear to me that this is an event I should come back to. Why? Because the technical level of the sessions was very well in place and the sessions I visited were actually pretty hardcore. Now, two years later, I will be back there, but this time as a speaker.
    My sessions at Sinergija12 - here are my three almost finished sessions:
    ASP.NET MVC 4 Overview - The session focuses on the new features of ASP.NET MVC 4 and gives the audience a good overview of what is coming. Demos cover all the important new features - agent-based output, new application templates, Web API and Single Page Applications. This session is for everybody who plans to move to ASP.NET MVC 4 or who plans to start building modern web sites.
    Building SharePoint Online applications using Napa Office 365 - The next version of Office 365 allows you to build SharePoint applications using a browser-based IDE hosted in the cloud. This session introduces the new tools and shows, through practical examples, how to build online applications for SharePoint 2013.
    Cloud-enabling ASP.NET MVC applications - The cloud era is here, and over the next few years more and more web applications will be hosted in cloud environments; some of our current web applications will also be moved to the cloud. This session shows the audience how to change the architecture of an ASP.NET web application so it runs on shared hosting and on Windows Azure with the same code base. The audience will also see how to debug and deploy web applications to Windows Azure.
    All developers who are coming to Sinergija12 are welcome to my sessions. See you there! :)

    Read the article

  • Get all IP addresses from a subnet mask

    - by Guntis
    I have the IP list shown below. How can I calculate all the IP addresses in those ranges on Linux? Are there tools that can do the calculation for me? I need this to check that I have not banned any CloudFlare IPs. As a firewall I am using Shorewall, and I am banning single IPs with fail2ban. As far as I know, I cannot determine the subnet mask from an IP address alone, right?
    204.93.240.0/24 204.93.177.0/24 199.27.128.0/21 173.245.48.0/20 103.22.200.0/22 141.101.64.0/18 108.162.192.0/18 190.93.240.0/20 188.114.96.0/20
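    A quick way to expand those CIDR blocks, or to test whether an address already banned by fail2ban falls inside one, is Python's standard ipaddress module (the banned address below is just an example):

        import ipaddress

        cloudflare_ranges = [
            "204.93.240.0/24", "204.93.177.0/24", "199.27.128.0/21",
            "173.245.48.0/20", "103.22.200.0/22", "141.101.64.0/18",
            "108.162.192.0/18", "190.93.240.0/20", "188.114.96.0/20",
        ]

        # list every address in one block (note: a /18 expands to 16384 addresses)
        for ip in ipaddress.ip_network("204.93.240.0/24"):
            print(ip)

        # check whether a banned address sits in any of the blocks
        banned = ipaddress.ip_address("141.101.70.5")    # example address
        if any(banned in ipaddress.ip_network(net) for net in cloudflare_ranges):
            print("that ban hits a CloudFlare range")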

    Read the article

  • Is IDirectInput8::FindDevice totally broken on Windows 7?

    - by Noora
    I'm developing on Windows 7, and using DirectInput8 for my input needs. I'm tracking gamepad additions and removals (that is, GUID_DEVINTERFACE_HID devices) using the DBT_DEVICEARRIVAL and DBT_DEVICEREMOVECOMPLETE messages, which works fine. However, what I've come to find out is that no matter what I do, passing the received values from DBT_DEVICEARRIVAL to IDirectInput8's FindDevice method, it will always fail to identify the device, returning DIERR_DEVICENOTREG. DirectInput still clearly knows about the device, because I can enumerate and create it just fine. I've tried with three different gamepads, and the error persists, so it's not about that either. I also tried passing some alternative interface GUIDs for the RegisterDeviceNotification call, didn't help. So, has anyone else faced the same problem, and have you found a usable workaround? I'm afraid I'll soon have to stoop down to re-enumerating all devices when something is added or removed, but I'll first give this question one last shot here. EDIT: For the record, I've also tried pretty much every single HID API & SetupAPI function for alternative ways of figuring out the needed GUIDs, with zero success. So if you're facing the same problem as me, don't bother with that route. I'm pretty sure those GUIDs are made up by DirectInput itself somehow. Short of reverse engineering dinput8.dll, I'm truly out of ideas now.

    Read the article

  • ESXi with non-standard hardware: HDD issues

    - by Hurricanepkt
    I have 3 very underutilized servers that I am condensing onto one of those Shuttle PCs with VMware ESXi. The HDD seems to be the bottleneck (the activity light is almost always solid). Right now I have a single 1TB Seagate 7200.11 connected by SATA. VMware ESXi cannot detect it when running in AHCI mode, but does when running in IDE mode. I have read that IDE mode can cost about 5% in performance, so getting AHCI working might give me enough breathing room. However, I am open to setting up an external eSATA drive or some sort of RAID to give me more than just that 5%. I am just wary of sinking money into a piece of hardware without knowing whether it will work. Does anyone know of resources or procedures for getting this working?

    Read the article

  • How do SSD drives reduce their latency?

    - by tigrou
    The first time I read about SSDs, I was surprised to learn that they internally use NAND flash chips. This kind of memory is generally slow (low bandwidth) and has high latency, while SSDs are just the opposite. But here is how it works: SSDs increase their bandwidth by using several NAND flash chips in parallel. In other words, the controller does data striping (RAID0-style) across several chips. What I don't understand is how SSDs manage to reduce latency (or at least do a lot better than a single NAND chip without a controller can).

    Read the article

  • iptables: separate clients from each other

    - by Florian Lagg
    Hello, is there a way to separate clients in a subnet so that they cannot reach each other? The infrastructure currently looks like this:
    192.168.0.1/24 - the gateway, a CentOS box with iptables
    192.168.0.10-20 - some clients which may reach each other
    192.168.0.30 - a single client which should not be able to reach the hosts 192.168.0.10-20, but should still be able to reach the gateway and the internet
    I don't know if it is possible; maybe you could give me your ideas on how it could be done. I cannot change anything on the machine 192.168.0.30 because it is a virtual machine I want to rent to someone. Thanks.
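    A hedged sketch of what gateway-side rules could look like. Note that FORWARD rules only see traffic that actually crosses the CentOS box; if all clients sit on the same switch/L2 segment, their traffic never reaches the gateway and has to be isolated there instead (port isolation / private VLAN):

        # drop traffic between the isolated host and the rest of the LAN
        iptables -I FORWARD -s 192.168.0.30 -d 192.168.0.0/24 -j DROP
        iptables -I FORWARD -d 192.168.0.30 -s 192.168.0.0/24 -j DROP
        # traffic from 192.168.0.30 to non-LAN (internet) destinations is untouched,
        # and packets addressed to the gateway itself go through INPUT, not FORWARD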

    Read the article

  • I don't understand how TDD helps me get a good design if I need a design to start testing it

    - by Michael Stum
    I'm trying to wrap my head around TDD, specifically the development part. I've looked at some books, but the ones I found mainly tackle the testing part - the history of NUnit, why testing is good, Red/Green/Refactor, and how to create a String Calculator. Good stuff, but that's "just" unit testing, not TDD. Specifically, I don't understand how TDD helps me get a good design if I need a design to start testing it. To illustrate, imagine these 3 requirements: a catalog needs to have a list of products; the catalog should remember which products a user viewed; users should be able to search for a product. At this point, many books pull a magic rabbit out of a hat and just dive into "Testing the ProductService", but they don't explain how they came to the conclusion that there is a ProductService in the first place. That is the "development" part of TDD that I'm trying to understand. There needs to be an existing design, but anything outside of entity services (that is: there is a Product, so there should be a ProductService) is nowhere to be found. (For example, the second requirement requires me to have some concept of a User, but where would I put the "remember viewed products" functionality? And is search a feature of the ProductService or a separate SearchService? How would I know which I should choose?) According to SOLID, I would need a UserService, but if I design a system without TDD, I might end up with a whole bunch of single-method services. Isn't TDD intended to make me discover my design in the first place? I'm a .NET developer, but Java resources would also work. I feel there doesn't seem to be a real sample application or book that deals with an actual line-of-business application. Can someone provide a clear example that illustrates the process of creating a design using TDD?
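    One way to picture the "development" part: the very first test is written against names that do not exist yet, so the test itself forces the design decision. A minimal sketch for the first requirement, shown in Python for brevity (ProductCatalog and Product are invented here, not taken from any particular book):

        import unittest

        # --- the test comes first and invents the design it needs ---------
        class CatalogTests(unittest.TestCase):
            def test_catalog_lists_its_products(self):
                catalog = ProductCatalog()          # this class doesn't exist yet
                catalog.add(Product(name="Keyboard"))
                catalog.add(Product(name="Mouse"))
                self.assertEqual(["Keyboard", "Mouse"],
                                 [p.name for p in catalog.products()])

        # --- the simplest code that makes the test pass -------------------
        class Product:
            def __init__(self, name):
                self.name = name

        class ProductCatalog:
            def __init__(self):
                self._products = []

            def add(self, product):
                self._products.append(product)

            def products(self):
                return list(self._products)

        if __name__ == "__main__":
            unittest.main()

    Whether search later becomes a method on the catalog or a separate service is then decided the same way: by the next failing test that needs it.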

    Read the article

  • Deleted info in Boot folder

    - by user207984
    First off, I'm using Zorin 7 OS. My boot folder was too full to install any new updates, so I used a tutorial I found somewhere on here to remove the unneeded linux-image files, and I must have also deleted the latest one. Now when I attempt to boot I get: error: no such partition. grub rescue> I used my MultiSystem USB to install (on a separate partition) a different Linux OS (Kali) and no longer get that error; however, it will ONLY give me the option to boot Kali Linux. Here's the biggest new problem though: I used the built-in hard drive encryption option for Zorin 7 when I initially installed it, so now when I attempt to explore it (to get all my saved data, which is REALLY important to me), it asks me for the encryption password. However, the password is not recognized, and I know it's right - I had to type it in every single day. So I either need a way to restore my Zorin 7 boot files or GRUB or whatever, so I can boot it up... or I need to know how to fix my encryption problem to save all my info.

    Read the article

  • If I change the CPU, must I reinstall the OS?

    - by dag729
    Hi, as suggested by the title, I want to change CPUs: I currently have two computers, one with Ubuntu running on an AMD Athlon 64 dual core 5200+ and the other with FreeBSD running on an AMD Sempron single core LE-1250. I would like to swap (I am not sure that this is the correct term...) the CPUs between the two computers, that is, take the dual core from the Ubuntu PC and put it inside the FreeBSD PC and vice versa. The motherboard is the same in both. Do you think I will encounter problems?

    Read the article

  • So what is Active GridLink for RAC?

    - by Ruma Sanyal
    I had referred to Active GridLink for RAC in my blog yesterday and have since received several questions on this topic, so I decided to revisit Active GridLink. With the release of version 11g, Oracle WebLogic Server started to provide strong support for the Real Application Clusters (RAC) features in Oracle Database 11g, minimizing database access time while allowing transparent access to rich pooling management functions that maximize both connection performance and availability. WebLogic is the only application server in the marketplace which has been fully integrated and certified with Oracle Database RAC 11g without losing any rich functionality. Active GridLink provides Fast Connection Failover (FCF), Runtime Connection Load-Balancing (RCLB), and graceful shutdown of RAC instances. As the key foundation for deeper integration with Oracle RAC, this single data source implementation in Oracle WebLogic Server supports the full and unrestricted use of database services as the connection target for a data source. For more details, and to understand how our customer NEC leverages this capability, read the whitepapers on this topic. Get in-depth 'how-to' details from this YouTube video from our resident expert, Frances Zhao.

    Read the article

  • Use Ultracopier / Teracopy with Ditto Clipboard Manager

    - by drrtyrokka
    I really like the tool Ultracopier because of the copy acceleration and the additional details it shows on Windows; the ability to pause a copy is also really helpful. Additionally, I want to use a clipboard manager such as Ditto. The problem is that when I run Ditto, my system won't use Ultracopier for copying anything - it uses the Windows default. How can I use the advantages of both tools? Is there any similar tool (perhaps a single tool) which gives me both sets of options?

    Read the article

  • Algorithms for Data Redundancy and Failover for distributed storage system?

    - by kennetham
    I'm building a distributed storage system that works with different storage sizes. For instance, my storage devices have sizes of 50GB, 70GB, 150GB, 250GB and 1000GB - five storage devices in one system. My application will store arbitrary files to the storage system. Question: how can I build distributed storage with data redundancy and failover, storing documents, videos and any other type of file, while ensuring that should any one storage device fail, there is still another copy of those files on another device? The concern is that the 50GB device can only hold so many files compared to the 70GB, 150GB, etc. devices. Treating the five devices as one cloud-like pool, is there a logical way for my application to distribute or store the files? How do I ensure data redundancy across different storage sizes? Is there an algorithm to collate multiple blob files into a single file archive? What is the best solution for one cloud storage pool with multiple different storage sizes? I am opening this topic to discuss the best way to implement this idea (keeping it simple), the issues with such an implementation, performance measurements, and its limitations.
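    As a purely illustrative starting point for the placement question, here is a rough sketch that keeps two copies of every file on distinct devices and weights the choice by remaining free space, so the 50GB device fills no faster than the 1TB one (the sizes mirror the question; the device names are invented, and nothing here is production-grade):

        import random

        # capacities in GB; "used" tracks what has been placed so far
        devices = {"d1": {"cap": 50, "used": 0}, "d2": {"cap": 70, "used": 0},
                   "d3": {"cap": 150, "used": 0}, "d4": {"cap": 250, "used": 0},
                   "d5": {"cap": 1000, "used": 0}}

        REPLICAS = 2   # survive the loss of any single device

        def place(file_size_gb):
            """Pick REPLICAS distinct devices, weighted by remaining free space."""
            candidates = {d: v["cap"] - v["used"] for d, v in devices.items()
                          if v["cap"] - v["used"] >= file_size_gb}
            if len(candidates) < REPLICAS:
                raise RuntimeError("not enough free devices for redundancy")
            chosen = []
            for _ in range(REPLICAS):
                names, weights = zip(*candidates.items())
                pick = random.choices(names, weights=weights, k=1)[0]
                chosen.append(pick)
                devices[pick]["used"] += file_size_gb
                del candidates[pick]   # replicas must land on different devices
            return chosen

        print(place(4))   # e.g. ['d5', 'd4']

    Real systems (Ceph's CRUSH placement, HDFS block replication) refine the same idea with failure domains and rebalancing.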

    Read the article

  • When adding second processor to SQL Server, will it automatically balance the load?

    - by ddavis
    We have a SQL Server 2008 R2 (10.5) on a dedicated box with a single 2.4Ghz processor, which regularly runs at 70-80% CPU. We are going to be adding a significant number of users to the application and therefore want to add a second processor to the box (scale up). Will SQL Server automatically use the second processor to balance threads, or is there additional configuration that will need to be done? In other words, will adding the second processor drop my CPU usage to 35-40% per CPU, automatically balancing the load? Based on what I read here, it seems that it will: http://msdn.microsoft.com/en-us/library/ms181007.aspx However, I've read elsewhere that CPU performance gains can be made by assigning database tables to different filegroups, but I'm not sure we want to get that complicated at this point.

    Read the article

  • What kind of permission is this? (Groups+Roles)

    - by Jorge
    I'm starting to need access control with roles in my app. I don't know much about this, but I understand how vBulletin works: I create groups, then give permissions to the groups. I think what I need is Role-Based Access Control (RBAC), but I'm not sure, because I need to give permissions to groups instead of single users (maybe it's not that complicated to achieve). Example of what I'm thinking, given a post: the Editors group has permission to view it before it's published; the Editors group has permission to edit its content; the Public group (default) does not have permission to view it before it's published; the Admin group has permission to delete the post. So basically I want guidance on whether RBAC is what I need. Also, how should group membership be stored for a user? For example, would it be good to have ID, NAME, PASSWORD, GROUPS as (1, MyName, MyPassword, 1/2/3/4/5) and explode it via PHP, or one row per group membership in a separate table, e.g. USERID, USERGROUP with values (1, 1), (1, 2)? It should probably be the second way because of normalization, but I haven't taken Databases 1 at college yet.
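    For what it's worth, the example above maps onto plain group-based RBAC quite directly. A tiny sketch, in Python only to keep it short (the group and permission names come from the question; the user names are invented). The USER_GROUPS mapping is exactly what a user_groups junction table holds - one row per membership rather than a "1/2/3/4/5" column:

        # group -> set of permissions (the "role" side of RBAC)
        GROUP_PERMS = {
            "public":  set(),
            "editors": {"post.view_unpublished", "post.edit"},
            "admins":  {"post.view_unpublished", "post.edit", "post.delete"},
        }

        # user -> groups; in the database this is a (user_id, group_id) table
        USER_GROUPS = {
            "jorge": {"editors"},
            "alice": {"editors", "admins"},
            "guest": {"public"},
        }

        def can(user, permission):
            """A user may do something if any of their groups grants it."""
            return any(permission in GROUP_PERMS[g]
                       for g in USER_GROUPS.get(user, ()))

        print(can("jorge", "post.delete"))   # False
        print(can("alice", "post.delete"))   # True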

    Read the article

  • New whitepaper: Evolution from the Traditional Data Center to Exalogic: An Operational Perspective

    - by Javier Puerta
    IT organizations are struggling with the need to balance the day-to-day concerns of data center management against the business level requirements to deliver long-term value. This balancing act has proven difficult and inefficient: systems and application management tools are resource intensive and traditional infrastructure management architectures have developed over time on a project by project basis. These traditional management systems consist of multiple tools that require administrators to waste time performing too many steps to handle routine administrative tasks. Operational efficiency and agility in your enterprise are directly linked to the capabilities provided by the management layer across the entire stack, from the application, middleware, operating system, compute, network and storage. Only when this end to end capability is provided will we experience the full benefit of a scalable, efficient, responsive and secure datacenter. Managing Exalogic is substantially less complex and error prone than managing traditional systems built from individually sourced, multi-vendor components because Exalogic is designed to be administered and maintained as a single, integrated system (Figure 1). It is at the forefront of the industry-wide shift away from costly and inferior one-off platforms toward private clouds and Engineered Systems. Read the full whitepaper "Evolution from the Traditional Data Center to Exalogic: An Operational Perspective". Full document is available for download at the Exadata Partner Community Collaborative Workspace (for community members only - if you get an error message, please register for the Community first).

    Read the article

  • Higher Performance With Spritesheets Than With Rotating Using C# and XNA 4.0?

    - by Manuel Maier
    I would like to know what the performance difference is between using multiple sprites in one file (a sprite sheet) to draw a game character that can face in 4 directions, and using one sprite per file but rotating that character as needed. I am aware that the sprite-sheet method restricts the character to predefined directions, whereas the rotation method would give the character the freedom of "looking everywhere". Here's an example of what I am doing.
    Single Sprite Method - assuming I have a 64x64 texture that points north, I do the following if I want it to point east:
        spriteBatch.Draw(
            _sampleTexture,
            new Rectangle(200, 100, 64, 64),
            null,
            Color.White,
            (float)(Math.PI / 2),
            Vector2.Zero,
            SpriteEffects.None,
            0);
    Multiple Sprite Method - now I have a sprite sheet (128x128) where the top-left 64x64 section contains a sprite pointing north, the top-right 64x64 section points east, and so forth. To make it point east, I do the following:
        spriteBatch.Draw(
            _sampleSpritesheet,
            new Rectangle(400, 100, 64, 64),
            new Rectangle(64, 0, 64, 64),
            Color.White);
    So which of these methods uses less CPU time, and what are the pros and cons? Is .NET/XNA optimizing this in any way (e.g. noticing that the same call was made last frame and reusing an already rendered/rotated image that is still in memory)?

    Read the article

  • IDirect3DDevice9::GetRenderTargetData() returns no data

    - by P. Avery
    I've got a simple function to get the render target data of an RT (in the default pool). This particular RT has a resolution of 1x1 (it's the 10th and final mip of a texture). Here is my code to get the data from IDirect3DSurface9 *pTargetSurface:
        IDirect3DSurface9 *pSOS = NULL;
        pd3dDevice->CreateOffScreenPlainSurface( 1, 1, D3DFMT_A8R8G8B8, D3DPOOL_SYSTEMMEM, &pSOS, NULL );
        // get residual energy
        if( FAILED( hr = pd3dDevice->GetRenderTargetData( pTargetSurface, pSOS ) ) )
        {
            DebugStringDX( ClassName, "Failed to IDirect3DDevice9::GetRenderTargetData() at DownsampleArea()", __LINE__, hr );
            goto Exit;
        }
        // lock surface
        if( FAILED( hr = pSOS->LockRect( &rct, NULL, D3DLOCK_READONLY ) ) )
        {
            DebugStringDX( ClassName, "Failed to IDirect3DSurface9::LockRect() at DownsampleArea()", __LINE__, hr );
            goto Exit;
        }
        // get residual energy from downsampled texture
        pByte = ( BYTE* )rct.pBits;
        D3DXVECTOR4 vEnergy;
        vEnergy.z = ( float )pByte[ 0 ] / 255.0f;
        vEnergy.y = ( float )pByte[ 1 ] / 255.0f;
        vEnergy.x = ( float )pByte[ 2 ] / 255.0f;
        vEnergy.w = ( float )pByte[ 3 ] / 255.0f;
        V( pSOS->UnlockRect() );
    All formatting and settings are correct, and DirectX in debug mode shows no errors... The problem is that the 4 bytes above are all 0. I know this to be incorrect from debugging with PIX: PIX shows that the RGB values are 0.078 and Alpha is 1. These values are not less than what can be represented by a single byte (1/255). Any ideas? Am I copying the render target data correctly?

    Read the article

  • Starting VMs from an executable with as low overhead as possible

    - by Robert Koritnik
    Is there a solution for creating a virtual machine and starting it via an executable file, ideally as quickly as possible? Strange situation? Not at all. Read on... Real-life scenario: since we can't have a domain controller on a non-server OS, it would be nice to have the domain controller in as thin a machine as possible (possibly Samba or similar, because we'd like it to start up as quickly as possible, in a matter of a few seconds), packed into a single executable. We could then configure our non-server OS to run the executable when it starts, before the user logs in. This would make it possible to log in to a domain.
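    If a full hypervisor is acceptable, most of them can already be driven this way from a small launcher script; a hedged example with VirtualBox on a Windows host (the VM name "dc" and the install path are assumptions):

        rem start-dc.cmd - run from the host's startup scripts or Task Scheduler
        "C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" startvm "dc" --type headless

    How quickly the guest becomes usable then depends mostly on the guest itself (a minimal Samba domain controller image boots far faster than a full server OS), not on the launcher.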

    Read the article

  • Rendering multiple squares fast?

    - by Sam
    So I'm taking my first steps with OpenGL development on Android and I'm stuck on some serious performance issues... What I'm trying to do is render a whole grid of single-colored squares onto the screen, and I'm getting framerates of ~7FPS. The squares are 9px in size right now with a one-pixel border in between, so I get a few thousand of them. I have a class "Square", and the renderer iterates over all squares every frame and calls the draw() method of each (the iteration itself is fast enough; with no OpenGL code the whole thing runs smoothly at 60FPS). Right now the draw() method looks like this:
        // Prepare the square coordinate data
        GLES20.glVertexAttribPointer(mPositionHandle, COORDS_PER_VERTEX,
                GLES20.GL_FLOAT, false, vertexStride, vertexBuffer);
        // Set color for drawing the square
        GLES20.glUniform4fv(mColorHandle, 1, color, 0);
        // Draw the square
        GLES20.glDrawElements(GLES20.GL_TRIANGLES, drawOrder.length,
                GLES20.GL_UNSIGNED_SHORT, drawListBuffer);
    So it's actually only 3 OpenGL calls. Everything else (loading shaders, filling buffers, getting the handles, etc.) is done in the constructor, and things like the program and the handles are static attributes. What am I missing here - why is it rendering so slowly? I've also tried loading the buffer data into VBOs, but that was actually slower... maybe I did something wrong there, though. Any help greatly appreciated! :)

    Read the article

  • Which version management design methodology should be used for dependent system nodes?

    - by actiononmail
    This is my first question, so please tell me if it is too vague or hard to understand. My question is about high-level design. We have a system (specifically an ATCA chassis) configured in a star topology, with a Master Node (MN) and several subordinate nodes (SN). All nodes are connected via Ethernet and will run Linux along with other proprietary applications. I have to build a recovery framework so that any software entity - whether Linux, the ramdisk or an application - can be rolled back to a previous good version if something bad happens. Thus I am thinking of maintaining a State Version Matrix on the MN, where each state (1, 2, ..., n) represents known-good kernel, ramdisk and application versions for each SN. It may happen that one SN's version depends on another SN's version; please see the following diagram. So I am in a dilemma about whether to use the package-management methodology used by Debian distributions (like Ubuntu) or a Git-repository methodology in order to roll back to previous good versions on either one SN or all the dependent SNs. The method should also make it easy to upgrade the SNs along with the MN. Some of the features I am trying to achieve:
    1) An upgrade of even a single software entity is achievable without hindering the others.
    2) Dependency checks must be done before applying a rollback or upgrade on each SN.
    3) A user prompt should be given if a dependency check fails; if the user still goes ahead with the rollback, all the SNs should be notified to roll back their own releases (if required).
    4) The binaries should be distributed to the SNs in advance so that the recovery process is faster, rather than fetching everything from the MN each time.
    5) Release patches from developers for bug fixes and feature enhancements can be applied on a running system.
    6) Each version can be easily tracked and distinguished.
    Thanks
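    Whichever mechanism ends up shipping the bits (a dpkg/apt repository per node or git), the dependency check in points 2 and 3 can sit above it as plain data on the MN. A rough, purely illustrative sketch (the node names, version strings and constraint table are all invented):

        # state matrix kept on the MN: per state, the versions every SN must run
        STATES = {
            1: {"SN1": "kernel-3.2/app-1.0", "SN2": "kernel-3.2/app-1.0"},
            2: {"SN1": "kernel-3.4/app-1.1", "SN2": "kernel-3.4/app-1.1"},
        }

        # cross-node constraints: (node, version) -> set of (node, version) it needs
        DEPENDS = {
            ("SN1", "kernel-3.4/app-1.1"): {("SN2", "kernel-3.4/app-1.1")},
        }

        def rollback_plan(target_state, current):
            """Return the nodes that must change, or raise if a constraint breaks."""
            wanted = STATES[target_state]
            changes = {n: v for n, v in wanted.items() if current.get(n) != v}
            for node, version in wanted.items():
                for dnode, dversion in DEPENDS.get((node, version), ()):
                    if wanted.get(dnode) != dversion:
                        raise RuntimeError(f"{node}@{version} needs {dnode}@{dversion}")
            return changes

        current = {"SN1": "kernel-3.4/app-1.1", "SN2": "kernel-3.4/app-1.1"}
        print(rollback_plan(1, current))   # both SNs have to move together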

    Read the article

  • Motivation for service layer (instead of just copying dlls)?

    - by BornToCode
    I'm creating an application which has 2 different UIs, so I'm building it with a service layer, which I understood is appropriate for such a case. However, I found myself just creating web methods for every single method I have in the BL layer, so the services are basically built from methods that look like this: return customers_bl.Get_Customer_Prices(customer_id); I understood that a main point of the service layer is to prevent duplication of code, so I asked myself: well, why not just reference BL.dll (and DAL.dll) from the other UI and re-copy the DLL files whenever I make a change? It might not be so 'neat', but is the whole purpose of the service layer to prevent this? (I know something is wrong in my approach; I'm probably missing the importance of the service layer. I'd like more motivation to create another layer, especially because many of my BL functions ALREADY look like: return customers_dal.Get_Customer_Prices(cust_id), which led me to ask: was it really necessary to create the BL just because several functions actually have LOGIC inside?) So I'm looking for more motivation for creating ONE MORE layer; I'm sure it's not just so I won't have to re-copy the DLLs on changes. Am I grasping it wrong? Any simple guidelines on how to design a service layer (corresponding to all the BL functions or not? any simple example?), or any enlightenment on the subject?

    Read the article
