Search Results

Search found 11825 results on 473 pages for 'live stream'.

  • Force files to save in all browsers - not open in browser window

    - by Joshc
    I'm after a simple solution that works in all browsers: for specific file types, or for links targeted via a class, how can I force a download in all major browsers? I thought I had found the perfect solution for an Apache server by adding this to the .htaccess (from http://css-tricks.com/snippets/htaccess/force-files-to-download-not-open-in-browser/):

        AddType application/octet-stream .csv
        AddType application/octet-stream .xls
        AddType application/octet-stream .doc
        AddType application/octet-stream .avi
        AddType application/octet-stream .mpg
        AddType application/octet-stream .mov
        AddType application/octet-stream .pdf

    It seems to work in Firefox and Safari, but not Chrome or IE (I have not tested anything else). Can anyone help me with a way to make links force a download of the file, instead of opening it in the browser, for ALL browsers? I can't seem to find a fully browser-proof solution. Is it not possible? Any links to tutorials or snippets would be awesome. My website is PHP based, so I can make it work with PHP if possible. Thanks
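    Since the site is PHP based, one cross-browser fallback is to route download links through a small PHP script that sends the attachment headers itself; unlike the MIME-type trick, the Content-Disposition: attachment header is honored by Chrome and IE as well. This is only a hedged sketch: the download.php name and the files/ directory are assumptions, and the path check is minimal rather than production-hardened.

        <?php
        // download.php?file=report.pdf  (assumes downloadable files live in ./files/)
        $file = basename($_GET['file']);          // basename() strips path components
        $path = __DIR__ . '/files/' . $file;
        if (!is_file($path)) {
            header('HTTP/1.0 404 Not Found');
            exit('Not found');
        }
        header('Content-Type: application/octet-stream');
        header('Content-Disposition: attachment; filename="' . $file . '"');
        header('Content-Length: ' . filesize($path));
        readfile($path);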

  • How to reload different objects via ApplicationSettingsBase?

    - by younevertell
    My question is how to recover TestASettings and TestBSettings from TestASettingsString and TestBSettingsString separately in LoadSavedSettings? Thanks

        // TestASettingsString and TestBSettingsString are byte[]
        // TestASettings and TestBSettings are two objects to be saved

        private void SettingsSaving(object sender, CancelEventArgs e)
        {
            try
            {
                var stream = new MemoryStream();
                var formatter = new BinaryFormatter();
                formatter.Serialize(stream, TestASettings);
                // TestASettingsString and TestBSettingsString are byte[]
                TestASettingsString = stream.ToArray();
                stream.Flush();
                formatter.Serialize(stream, TestBSettings);
                TestBSettingsString = stream.ToArray();
                stream.Close();
            }
            catch (Exception ex)
            {
                Debug.WriteLine(ex);
            }
        }

        private void LoadSavedSettings()
        {
            Reload();
            // how to get TestASettings and TestBSettings from
            // TestASettingsString and TestBSettingsString separately?
        }
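    A hedged sketch of the deserialization half (the concrete settings types are placeholders, labelled here as TestASettingsType and TestBSettingsType). Note one catch in the saving code above: both objects are serialized into the same MemoryStream, so the second ToArray() call captures the bytes of both objects. Serializing each object into its own stream keeps the two byte arrays independent and round-trippable:

        private void LoadSavedSettings()
        {
            Reload();
            var formatter = new BinaryFormatter();
            using (var streamA = new MemoryStream(TestASettingsString))
            {
                // cast to whatever concrete type TestASettings actually is
                TestASettings = (TestASettingsType)formatter.Deserialize(streamA);
            }
            using (var streamB = new MemoryStream(TestBSettingsString))
            {
                TestBSettings = (TestBSettingsType)formatter.Deserialize(streamB);
            }
        }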

  • Adding Blog to Your Orchard Website

    - by hajan
    One of the common features in today's content management systems is the ability to create your own blog on your website, and a blog is one of the most often needed features for various types of sites. Out of the box, Orchard gives you this, so you can create your own blog on your Orchard website in a pretty easy way. Besides the fact that you can very easily create your own blog, Orchard also gives you some extra features related to blogging, such as connecting third-party client applications (e.g. Windows Live Writer) to your blog so that you can publish blog posts remotely. You can already find all the information in this blog post on the http://orchardproject.net website; however, I thought it would be nice to make a summary in one blog post. I assume you have already installed Orchard and are familiar with its environment and administration dashboard. If you haven't, please read this blog post first.

    CREATE YOUR BLOG

    First of all, go to the Orchard Administration Dashboard and click Blog in the left menu. Fill the form with all the needed data and click Save. Right after, click New post and add your first post. After that, go to the homepage (click Your Site in the top-left corner) and you should see the Blog link in your menu. After clicking Blog, you will be directed to the blog page. Once you click My First Post, you will see that your blog already supports commenting: you can add a new comment, submit it, and see it appear (you can enable/disable this from the Administration Dashboard in your blog settings). By following these steps, you have already set up your blog on your Orchard website.

    CONNECT YOUR BLOG WITH WINDOWS LIVE WRITER

    Since many bloggers prepare their blog posts using third-party client applications like Windows Live Writer, it's very useful if your blog engine has the ability to work with these applications and let them post and publish remotely. The client applications use the XmlRpc interface in order to manage and publish blogs remotely. What is great about Orchard is that it gives you the XmlRpc and Remote Publishing modules out of the box; you only need to enable these features from the Modules section of your Orchard Administration Dashboard. So, let's go through the steps of enabling them and making your previously created blog work with third-party blogging clients.

    1. Go to the Administration Dashboard and click Modules. You will see that you already have the Remote Blog Publishing and XmlRpc features for Content Publishing, but both are disabled by default. If you click Enable on Remote Blog Publishing alone, both will be enabled at once, since they are dependent features. After you click Enable, if everything is OK, a confirmation message is displayed. Now the features are enabled and ready.

    2. Next, open Windows Live Writer. In your Blog Accounts, click Add blog account, and in the next window choose Other services. After that, click the Blog link on your Orchard website and copy the URL; my URL (on the localhost development server) is http://localhost:8191/blog. Then add the login credentials you use to log in to Orchard and click Next.

    3. If you have set everything up successfully, Windows Live Writer will do the rest. Once it finishes, you get a window where you can specify the name of the blog you have just connected Windows Live Writer to... and then you are done. You can see that Windows Live Writer has detected the Orchard theme I am using. After you finish a blog post, click Publish and refresh the Blog page on your Orchard website: there is the blog post, published directly from Windows Live Writer to my Orchard blog.

    I hope this was a useful blog post. Regards, Hajan

    Reference and other useful posts:
    - Build incredible content-driven websites using Orchard CMS
    - Create blog on your site with Orchard CMS
    - Blogging using Windows Live Writer in your Orchard CMS Blog
    - Orchard Website

  • Anatomy of a .NET Assembly - CLR metadata 1

    - by Simon Cooper
    Before we look at the bytes comprising the CLR-specific data inside an assembly, we first need to understand the logical format of the metadata. (For this post I'll only be looking at simple pure-IL assemblies; mixed-mode assemblies and other things complicate matters quite a bit.)

    Metadata streams

    Most of the CLR-specific data inside an assembly is inside one of 5 streams, which are analogous to the sections in a PE file. The name of each section in a PE file starts with a ., and the name of each stream in the CLR metadata starts with a #. All but one of the streams are heaps, which store unstructured binary data. The predefined streams are:

    #~        Also called the metadata stream, this stream stores all the information on the types, methods, fields, properties and events in the assembly. Unlike the other streams, the metadata stream has predefined contents and structure.
    #Strings  This heap is where all the namespace, type and member names are stored. It is referenced extensively from the #~ stream, as we'll see later.
    #US       Also known as the user string heap, this stream stores all the strings used in code directly. All the strings you embed in your source code end up in here. This stream is only referenced from method bodies.
    #GUID     This heap exclusively stores GUIDs used throughout the assembly.
    #Blob     This heap is for storing pure binary data: method signatures, generic instantiations, that sort of thing.

    Items inside the heaps (#Strings, #US, #GUID and #Blob) are indexed using a simple binary offset from the start of the heap. At that offset is a coded integer giving the length of the item, and the item's bytes immediately follow. The #GUID stream is slightly different, in that GUIDs are all 16 bytes long, so a length isn't required.

    Metadata tables

    The #~ stream contains all the assembly metadata. The metadata is organised into 45 tables, which are binary arrays of predefined structures containing information on various aspects of the metadata. Each entry in a table is called a row, and the rows are simply concatenated together in the file on disk. For example, each row in the TypeRef table contains, in order:

    1. A reference to where the type is defined (most of the time, a row in the AssemblyRef table).
    2. An offset into the #Strings heap with the name of the type.
    3. An offset into the #Strings heap with the namespace of the type.

    The important tables are (with their table number in hex):

    0x2: TypeDef, 0x4: FieldDef, 0x6: MethodDef, 0x14: EventDef, 0x17: PropertyDef
        Contain basic information on all the types, fields, methods, events and properties defined in the assembly.
    0x1: TypeRef
        The details of all the referenced types defined in other assemblies.
    0xa: MemberRef
        The details of all the referenced members of types defined in other assemblies.
    0x9: InterfaceImpl
        Links the types defined in the assembly with the interfaces that type implements.
    0xc: CustomAttribute
        Contains information on all the attributes applied to elements in this assembly, from method parameters to the assembly itself.
    0x18: MethodSemantics
        Links properties and events with the methods that comprise the get/set or add/remove methods of the property or event.
    0x1b: TypeSpec, 0x2b: MethodSpec
        These tables provide instantiations of generic types and methods for each usage within the assembly.

    There are several ways to reference a single row within a table. The simplest is to specify the 1-based row index (RID); the indexes are 1-based so that a value of 0 can represent 'null'. In this case, which table the row index refers to is inferred from the context. If the table can't be determined from the context, then a particular row is specified using a token. This is a 4-byte value with the most significant byte specifying the table, and the other 3 bytes specifying the 1-based RID within that table. This is generally how a metadata table row is referenced from the instruction stream in method bodies. The third way is to use a coded token, which we will look at in the next post.

    So, back to the bytes

    Now we've got a rough idea of how the metadata is logically arranged, we can look at the bytes comprising the start of the CLR data within an assembly. The first 8 bytes of the .text section are used by the CLR loader stub. After that, the CLR-specific data starts with the CLI header. I've highlighted the important bytes in the diagram. In order, they are:

    1. The size of the header. As the header is a fixed size, this is always 0x48.
    2. The CLR major version. This is always 2, even for .NET 4 assemblies.
    3. The CLR minor version. This is always 5, even for .NET 4 assemblies, and seems to be ignored by the runtime.
    4. The RVA and size of the metadata header. In the diagram, the RVA 0x20e4 corresponds to the file offset 0x2e4.
    5. Various flags specifying if this assembly is pure-IL, whether it is strong name signed, and whether it should be run as 32-bit (this is how the CLR differentiates between x86 and AnyCPU assemblies).
    6. A token pointing to the entrypoint of the assembly. In this case, 06 (the last byte) refers to the MethodDef table, and 01 00 00 refers to the first row in that table.
    7. (After a gap) The RVA of the strong name signature hash, which comes straight after the CLI header. The RVA 0x2050 corresponds to file offset 0x250.

    The rest of the CLI header is mainly used in mixed-mode assemblies, and so is zeroed in this pure-IL assembly. After the CLI header comes the strong name hash, which is a SHA-1 hash of the assembly using the strong name key. After that come the bodies of all the methods in the assembly, concatenated together. Each method body starts with a header, which I'll be looking at later. As you can see, this is a very small assembly with only 2 methods (an instance constructor and a Main method). After that, near the end of the .text section, comes the metadata, containing a metadata header and the 5 streams discussed above. We'll be looking at this in the next post.

    Conclusion

    The CLI header data doesn't have much to it, but we've covered some concepts that will be important in later posts: the logical structure of the CLR metadata and the overall layout of CLR data within the .text section. Next, I'll have a look at the contents of the #~ stream, and how the table data is arranged on disk.
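    As a concrete illustration of the token layout described above (most significant byte selects the table, lower three bytes hold the 1-based RID), here is a minimal C# sketch decoding the entrypoint token from the CLI header example:

        // The bytes 01 00 00 06 in the file form the little-endian token 0x06000001
        uint token = 0x06000001;
        byte table = (byte)(token >> 24);   // 0x06 -> the MethodDef table
        uint rid = token & 0x00FFFFFF;      // 1    -> the first row
        Console.WriteLine("Table 0x{0:X2}, RID {1}", table, rid);  // Table 0x06, RID 1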

  • How do I access XHR responseBody from JavaScript?

    - by Cheeso
    I've got a web page that uses XMLHttpRequest to download a binary resource. Because it's binary, I'm trying to use xhr.responseBody to access the bytes. I've seen a few posts suggesting that it's impossible to access the bytes directly from JavaScript. This sounds crazy to me. Weirdly, xhr.responseBody is accessible from VBScript, so the suggestion is that I must define a method in VBScript in the webpage, and then call that method from JavaScript. See jsdap for one example.

        var IE_HACK = (/msie/i.test(navigator.userAgent) &&
                       !/opera/i.test(navigator.userAgent));

        if (IE_HACK) document.write('<script type="text/vbscript">\n\
        Function BinaryToArray(Binary)\n\
            Dim i\n\
            ReDim byteArray(LenB(Binary))\n\
            For i = 1 To LenB(Binary)\n\
                byteArray(i-1) = AscB(MidB(Binary, i, 1))\n\
            Next\n\
            BinaryToArray = byteArray\n\
        End Function\n\
        </script>');

        var xml = (window.XMLHttpRequest)
            ? new XMLHttpRequest()                     // Mozilla/Safari/IE7+
            : (window.ActiveXObject)
                ? new ActiveXObject("MSXML2.XMLHTTP")  // IE6
                : null;                                // Commodore 64?

        xml.open("GET", url, true);
        if (xml.overrideMimeType) {
            xml.overrideMimeType('text/plain; charset=x-user-defined');
        } else {
            xml.setRequestHeader('Accept-Charset', 'x-user-defined');
        }
        xml.onreadystatechange = function() {
            if (xml.readyState == 4) {
                if (!binary) {
                    callback(xml.responseText);
                } else if (IE_HACK) {
                    // call a VBScript method to copy every single byte
                    callback(BinaryToArray(xml.responseBody).toArray());
                } else {
                    callback(getBuffer(xml.responseText));
                }
            }
        };
        xml.send('');

    Is this really true? The best way? Copying every byte? For a large binary stream that's not gonna be very efficient. There is also a possible technique using ADODB.Stream, which is a COM equivalent of a MemoryStream. See here for an example. It does not require VBScript, but does require a separate COM object.

        if (typeof (ActiveXObject) != "undefined" &&
            typeof (httpRequest.responseBody) != "undefined") {
            // Convert httpRequest.responseBody byte stream
            // to a shift_jis encoded string
            var stream = new ActiveXObject("ADODB.Stream");
            stream.Type = 1;  // adTypeBinary
            stream.Open();
            stream.Write(httpRequest.responseBody);
            stream.Position = 0;
            stream.Type = 1;  // adTypeBinary
            stream.Read....   // ???? what here
        }

    I don't think that's gonna work; ADODB.Stream is disabled on most machines these days. In the IE8 developer tools (the IE equivalent of Firebug) I can see that responseBody is an array of bytes, and I can even see the bytes themselves. The data is right there. I don't understand why I can't get to it. Is it possible for me to read it with responseText? Hints? (Other than defining a VBScript method.)
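    For completeness, the getBuffer(xml.responseText) branch above is typically implemented by masking each character code: with charset=x-user-defined the browser maps each raw byte into the low 8 bits of a character, so the byte can be recovered with & 0xff. A hedged sketch of such a helper (the name getBuffer is assumed from the snippet above, and the exact mapping is browser-dependent):

        // Recover raw bytes from an x-user-defined responseText
        function getBuffer(responseText) {
            var bytes = [];
            for (var i = 0; i < responseText.length; i++) {
                bytes.push(responseText.charCodeAt(i) & 0xff);
            }
            return bytes;
        }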

  • .NET: Serializing object to a file from a 3rd party assembly

    - by MacGyver
    Below is a link that describes how to serialize an object, but it requires that the object you are serializing implement ISerializable. What I'd like to do is serialize an object that I did not define: an object based on a class in a 3rd-party assembly (from a project reference) that does not implement ISerializable. Is that possible? How can it be done? http://www.switchonthecode.com/tutorials/csharp-tutorial-serialize-objects-to-a-file

    Property (IWebDriver is an interface type):

        private IWebDriver driver;

    Object instance (FirefoxDriver is a class type):

        driver = new FirefoxDriver(firefoxProfile);

    ================ 3/21/2012 update after answer posted ================

    Why would this throw an error? It doesn't like this line:

        serializedObject.DriverInstance = (FirefoxDriver)driver;

    Error: Cannot implicitly convert type 'OpenQA.Selenium.IWebDriver' to 'OpenQA.Selenium.Firefox.FirefoxDriver'. An explicit conversion exists (are you missing a cast?)

    Here is the code:

        FirefoxDriverSerialized serializedObject = new FirefoxDriverSerialized();
        Serializer serializer = new Serializer();
        serializedObject = serializer.DeSerializeObject(@"C:\firefoxDriver.qa");
        driver = serializedObject.DriverInstance;
        if (driver == null)
        {
            driver = new FirefoxDriver(firefoxProfile);
            serializedObject.DriverInstance = (FirefoxDriverSerialized)driver;
            serializer.SerializeObject(@"C:\firefoxDriver.qa", serializedObject);
        }

    Here are the two serializer classes I built:

        public class Serializer
        {
            public Serializer() { }

            public void SerializeObject(string filename, FirefoxDriverSerialized objectToSerialize)
            {
                Stream stream = File.Open(filename, FileMode.Create);
                BinaryFormatter bFormatter = new BinaryFormatter();
                bFormatter.Serialize(stream, objectToSerialize);
                stream.Close();
            }

            public FirefoxDriverSerialized DeSerializeObject(string filename)
            {
                FirefoxDriverSerialized objectToSerialize;
                Stream stream = File.Open(filename, FileMode.Open);
                BinaryFormatter bFormatter = new BinaryFormatter();
                objectToSerialize = (FirefoxDriverSerialized)bFormatter.Deserialize(stream);
                stream.Close();
                return objectToSerialize;
            }
        }

        [Serializable()]
        public class FirefoxDriverSerialized : FirefoxDriver, ISerializable
        {
            private FirefoxDriver driverInstance;

            public FirefoxDriver DriverInstance
            {
                get { return this.driverInstance; }
                set { this.driverInstance = value; }
            }

            public FirefoxDriverSerialized() { }

            public FirefoxDriverSerialized(SerializationInfo info, StreamingContext ctxt)
            {
                this.driverInstance = (FirefoxDriver)info.GetValue("DriverInstance", typeof(FirefoxDriver));
            }

            public void GetObjectData(SerializationInfo info, StreamingContext ctxt)
            {
                info.AddValue("DriverInstance", this.driverInstance);
            }
        }

    ================ 3/23/2012 update #2: fixed serialization/de-serialization, but having another issue (might be relevant for a new question) ================

    This fixed the calling code. We delete the *.qa file when we call WebDriver.Quit(), because that's when we choose to close the browser; this kills off our cached driver as well. So if we start with a new browser window, we'll hit the catch block, create a new instance, and save it to our *.qa file (in serialized form).

        FirefoxDriverSerialized serializedObject = new FirefoxDriverSerialized();
        Serializer serializer = new Serializer();
        try
        {
            serializedObject = serializer.DeSerializeObject(@"C:\firefoxDriver.qa");
            driver = serializedObject.DriverInstance;
        }
        catch
        {
            driver = new FirefoxDriver(firefoxProfile);
            serializedObject = new FirefoxDriverSerialized();
            serializedObject.DriverInstance = (FirefoxDriver)driver;
            serializer.SerializeObject(@"C:\firefoxDriver.qa", serializedObject);
        }

    However, I'm still getting this exception:

        Acu.QA.Main.Test_0055_GiftCertificate_UserCheckout:
        SetUp : System.Runtime.Serialization.SerializationException : Type
        'OpenQA.Selenium.Firefox.FirefoxDriver' in Assembly 'WebDriver,
        Version=2.16.0.0, Culture=neutral, PublicKeyToken=1c2bd1631853048f'
        is not marked as serializable.
        TearDown : System.NullReferenceException : Object reference not set
        to an instance of an object.

    The third line in this code block is throwing the exception:

        public void SerializeObject(string filename, FirefoxDriverSerialized objectToSerialize)
        {
            Stream stream = File.Open(filename, FileMode.Create);
            BinaryFormatter bFormatter = new BinaryFormatter();
            bFormatter.Serialize(stream, objectToSerialize);  // <=== this line
            stream.Close();
        }
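    On the general question (binary-serializing a type you don't own and can't mark [Serializable]), one standard approach is a serialization surrogate registered with the formatter. Below is a minimal sketch built around a hypothetical third-party Widget type, not the actual WebDriver API. Note that even with a surrogate, serializing something like FirefoxDriver is of limited use: the deserialized object won't be attached to the original browser process.

        using System.Runtime.Serialization;
        using System.Runtime.Serialization.Formatters.Binary;

        // Hypothetical third-party type we cannot modify or mark [Serializable]
        public class Widget { public string Name; }

        // Tells the formatter how to capture and restore a Widget's state
        public class WidgetSurrogate : ISerializationSurrogate
        {
            public void GetObjectData(object obj, SerializationInfo info, StreamingContext ctx)
            {
                info.AddValue("Name", ((Widget)obj).Name);
            }

            public object SetObjectData(object obj, SerializationInfo info,
                                        StreamingContext ctx, ISurrogateSelector selector)
            {
                ((Widget)obj).Name = info.GetString("Name");
                return obj;
            }
        }

        // Registration: this formatter can now (de)serialize Widget instances
        var selector = new SurrogateSelector();
        selector.AddSurrogate(typeof(Widget),
                              new StreamingContext(StreamingContextStates.All),
                              new WidgetSurrogate());
        var formatter = new BinaryFormatter { SurrogateSelector = selector };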

  • DTS to AC3 conversion for LG TV using mediatomb DLNA server

    - by prion crawler
    I want to convert an MKV video file containing DTS audio into a stream with AC3 audio, and pass the resulting stream to MediaTomb's transcoding feature. MediaTomb will transfer the stream via DLNA to an LG TV, which does not support DTS audio. I have tried the VLC command below, but the TV does not recognize the stream, and playing the destination stream on a PC produces no sound.

        vlc -vvv -I dummy INPUT.file --sout \
            '#transcode{acodec=ac3,ab=256k,channels=2,threads=4} \
            :std{mux=ts,access=file,dst=DEST.file}'

    The following ffmpeg command gives a stream that plays on the TV with sound, but the ffmpeg process gets killed (with signal 15) within 10-15 seconds, and then the TV restarts the playback from the beginning. This goes on in a loop.

        ffmpeg -i INPUT.file -acodec ac3 -ab 384k -vcodec copy \
            -vbsf h264_mp4toannexb -f mpegts -y DEST.file

    I want a working DLNA server that transcodes DTS to AC3; any help is appreciated.
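    For reference, MediaTomb wires external transcoders into config.xml through a transcoding profile. The sketch below is hedged: the profile name, wrapper-script path and buffer sizes are assumptions, and the wrapper script would contain an ffmpeg invocation like the one above, reading %in and writing to %out.

        <transcoding enabled="yes">
          <mimetype-profile-mappings>
            <transcode mimetype="video/x-matroska" using="dts2ac3"/>
          </mimetype-profile-mappings>
          <profiles>
            <profile name="dts2ac3" enabled="yes" type="external">
              <mimetype>video/mpeg</mimetype>
              <accept-url>no</accept-url>
              <first-resource>yes</first-resource>
              <agent command="/usr/local/bin/dts2ac3.sh" arguments="%in %out"/>
              <buffer size="14400000" chunk-size="512000" fill-size="1024000"/>
            </profile>
          </profiles>
        </transcoding>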

  • jQuery: why does my live() handler declaration error out when the analogous click() one doesn't?

    - by Jason
    I have the following in a JavaScript file (using jQuery as well):

        $(function(){
            $('#mybutton').live('click', myObject.someMethod);
        });

        var myObject = {
            someMethod: function() {
                //do stuff
            }
        };

    I get a JS error on page load that says "myObject isn't defined". However, when I change the event handler in the doc.ready function to:

        $('#mybutton').live('click', function(){
            myObject.someMethod();
        });

    it works! I have code structured like the first example all over my codebase, and it works. W T F??
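    The usual explanation for this class of error is evaluation timing: passing myObject.someMethod dereferences myObject at the moment live() is called, while the anonymous wrapper defers the lookup until a click actually fires. A sketch of the difference (hedged: this assumes myObject is assigned later than the binding, e.g. in a script file loaded afterwards):

        // Eager: myObject is looked up right now, when live() is called.
        // If myObject has not been assigned yet, this line throws.
        $('#mybutton').live('click', myObject.someMethod);

        // Deferred: the wrapper binds immediately, but myObject is only
        // dereferenced when a click happens, by which time it exists.
        $('#mybutton').live('click', function() { myObject.someMethod(); });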

  • Git for Websites / post-receive / Separation of Test and Production Sites

    - by Walt W
    Hi all, I'm using Git to manage my website's source code and deployment, and currently have the test and live sites running on the same box. Following http://toroid.org/ams/git-website-howto originally, I came up with the following post-receive hook script to differentiate between pushes to my live site and pushes to my test site:

        while read ref
        do
            #echo "Ref updated:"
            #echo $ref  -- would print something like example at top of file
            result=`echo $ref | gawk -F' ' '{ print $3 }'`
            if [ $result != "" ]; then
                echo "Branch found: "
                echo $result
                case $result in
                    refs/heads/master )
                        git --work-tree=c:/temp/BLAH checkout -f master
                        echo "Updated master"
                        ;;
                    refs/heads/testbranch )
                        git --work-tree=c:/temp/BLAH2 checkout -f testbranch
                        echo "Updated testbranch"
                        ;;
                    * )
                        echo "No update known for $result"
                        ;;
                esac
            fi
        done
        echo "Post-receive updates complete"

    However, I have doubts that this is actually safe :) I'm by no means a Git expert, but I am guessing that Git probably keeps track of the currently checked-out branch head, and this approach probably has the potential to confuse it to no end. So a few questions:

    1. IS this safe?
    2. Would a better approach be to have my base repository be the test site repository (with a corresponding working directory), and then have that repository push changes to a new live site repository, which has a working directory corresponding to the live site base? This would also allow me to move production to a different server and keep the deployment chain intact.
    3. Is there something I'm missing? Is there a different, clean way to differentiate between test and production deployments when using Git for managing websites?

    As an additional note in light of Vi's answer, is there a good way to do this that would handle deletions without mucking with the file system much?

    Thank you, -Walt

    PS - The script I came up with for the multiple repos (and am using unless I hear better) is as follows:

        sitename=`basename \`pwd\``
        while read ref
        do
            #echo "Ref updated:"
            #echo $ref  -- would print something like example at top of file
            result=`echo $ref | gawk -F' ' '{ print $3 }'`
            if [ $result != "" ]; then
                echo "Branch found: "
                echo $result
                case $result in
                    refs/heads/master )
                        git checkout -q -f master
                        if [ $? -eq 0 ]; then
                            echo "Test Site checked out properly"
                        else
                            echo "Failed to checkout test site!"
                        fi
                        ;;
                    refs/heads/live-site )
                        git push -q ../Live/$sitename live-site:master
                        if [ $? -eq 0 ]; then
                            echo "Live Site received updates properly"
                        else
                            echo "Failed to push updates to Live Site"
                        fi
                        ;;
                    * )
                        echo "No update known for $result"
                        ;;
                esac
            fi
        done
        echo "Post-receive updates complete"

    And then the repo in ../Live/$sitename (these are "bare" repos with working trees added after init) has the basic post-receive:

        git checkout -f
        if [ $? -eq 0 ]; then
            echo "Live site `basename \`pwd\`` checked out successfully"
        else
            echo "Live site failed to checkout"
        fi

  • For what reason are live USB Linux images not 100% persistent?

    - by gcb
    I understand that in CD-R times non-persistence had a purpose, but what is the purpose now that pretty much everyone uses USB flash drives? Not to mention that USB 3.0 sticks are roughly 4x faster than my HD RAID. I'm writing this while taking a break from going over the Linux From Scratch guide... and I'm still baffled that persistence is not already the norm for all live images. So, is there any reason (besides historical) that I'm missing, one that will bite me after I finish this ext3-rw image?

  • Making a Live Thumb drive boot with Persistent files, settings AND *drivers* that load on boot?

    - by Luke Stanley
    I have seen https://wiki.ubuntu.com/LiveUsbPendrivePersistent but it's a mess. What methods support persistent drivers, as well as files and settings, without screwing up the lifespan of the flash drive? I'd like to hear your personal recommendations on, say, Portable Linux, USB Creator, Remastersys + Unetbootin, etc.

    Backstory: I have an Inspiron 1525 whose hard drive has been slowly dying. I want to switch to a live USB/CD/DVD system until I can get it repaired, but my laptop's internal wifi device requires a network connection by other means before Xubuntu will let it work, and then I have to enter my wifi key again, and THEN I have to reinstall Skype, etc... I'd be damned every time I have to shut the laptop down. I'm OK with making a shell script to install apps and copy settings as required, but a good persistent install should make that unnecessary; scripting is slow and doesn't take care of drivers. The last time I tried making an ISO with Remastersys, it didn't seem to copy all the required settings.

  • Is there a server distro with the capability of syncing live data to multiple machines?

    - by Adam Hart
    Scenario: I have a main server that is used for page building and storing master data, and it is accessed by a few clients on site. The company also has multiple branches, each with its own server that they connect to locally, but all branches need to work with the same data and have it synchronized across all servers in real (or near-real) time. Is there a way, or a specific server OS, that can sync live data across all of these servers? These servers would also need to be able to:

    - Configure AFP, FTP, CIFS, SMB
    - Continue to host their web server and database server in a Microsoft environment, but move the file server off to commodity hardware

    Just wondering if this is even possible.

  • What language/API to use for a standalone live-input audio visualizer app?

    - by knuckfubuck
    I develop with ActionScript and was glad to see that AIR 2.0 was going to give access to mic input data. I planned to use this to create a visualizer set to the tempo of incoming live audio. After a few days of Google research, it seems unlikely that it will be possible to analyze the mic input data in Flash/AIR. If anyone has ideas on how I can achieve this in AIR, please let me know (I'm open to workarounds). That being said, I don't want to give up on the idea, so I'm also interested in suggestions for another language/API to use. My requirements for the app are:

    - Runs on OS X
    - Two windows: one that can go fullscreen while the other (the controller GUI) stays put
    - Able to access live mic input data

    I've done reading on FFT and understand what needs to be done on the sound side, so no need to help with that.

  • How do I back up my Windows partition from an Ubuntu live CD?

    - by lalli
    My Windows partition (C:) is corrupt. I'm booting from an Ubuntu live CD and trying to copy all the files from C: to my external drive, but the system expands all of the links, producing a projected copy size of 1.8TB (my external drive is just 1TB, and the data in C: is around 700MB). Then I looked at dd and other backup utilities, but with everything I looked into, I couldn't figure out whether the resulting image would be readable in Windows through any other app. Has anyone else tried to back up data from a corrupted Windows installation using Ubuntu?
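    If a file-level copy is enough, the blow-up can usually be avoided by copying links as links instead of following them; a hedged sketch (device names and mount points are assumptions, and NTFS junctions are typically exposed as symlinks by ntfs-3g):

        # Mount the NTFS partition read-only and the external drive
        sudo mkdir -p /mnt/windows /mnt/backup
        sudo mount -o ro /dev/sda1 /mnt/windows
        sudo mount /dev/sdb1 /mnt/backup

        # -a implies -l: symlinks are copied as links, not followed,
        # which avoids the projected 1.8TB from expanded links
        sudo rsync -a --progress /mnt/windows/ /mnt/backup/windows-backup/

    For a raw image instead, dd makes a byte-for-byte copy of the partition; the image is loop-mountable from Linux, but Windows itself won't read it without third-party tools:

        sudo dd if=/dev/sda1 of=/mnt/backup/windows.img bs=4M conv=noerror,sync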

  • Does RTSP live streaming just not work on Android 1.5/1.6?

    - by Aurora
    My main dev phone is a Nexus 1 running 2.2. I have successfully been streaming live video to this device from a Wowza server for several weeks now. I have now taken my application (without modifications) and put it on a Sony Ericsson Xperia running 1.6. The video will not play. I get the following errors:

        MediaPlayer: Couldn't open file on client side, trying server side ...
        MediaPlayer: info/warning (1/26)
        PlayerDriver: Command PLAYER_INIT completed with an error or info PVMFFailure
        MediaPlayer: Error:(1,-1)
        VideoView: Error: 1,-1

    I've been googling around, but just can't seem to get a clear answer. Does anyone know if live streaming just doesn't work on some versions of Android?

  • Sending out spam using Gmail with live spaces links?

    - by FurtiveFelon
    Hey guys, I am puzzled as to how my email account could have been compromised. As an example (DON'T click on this address if you are not sure you are safe: katikaj2Bennetth74.spaces.live.com), that's the only link sent using my account, and the username appears to be randomly generated. After clicking into it on my Linux machine, it seems that the only post is a Viagra ad. I have never clicked on any of the links sent by any of my contacts. Does anyone know what could be causing this? Thanks a lot! Jason

  • GlassFish v3 and Java EE in production mode: what are the options to update a live web app?

    - by shadesco
    I am building a web app using Java EE and GlassFish v3. I want to move it to production soon; however, I have zero experience using GlassFish in production. I would appreciate guidance on how to approach the following scenario: say I have deployed the web app using the admin console, pointing at the .war file. What if I then want to update this live application? Do I need to:

    a) undeploy, build a new .war file (with the updates), paste the .war file into the app folder, and redeploy? or
    b) move in only the changed files (i.e. .class files, .jsp, etc.) without undeploying first?
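    For what it's worth, GlassFish's asadmin tool also has a redeploy subcommand that replaces a running application in one step, avoiding the manual undeploy/copy/deploy cycle of option (a). A hedged sketch (the application name and path are placeholders):

        # Rebuild the war, then redeploy it over the running application
        asadmin redeploy --name mywebapp /path/to/mywebapp.war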

  • How can I tell if my live web-server is overloaded?

    - by Nick G
    We have a live web server which doesn't seem to be performing all that well. It's a Dell PowerEdge machine, a few years old (dual core, 4GB), which is hosting about 20 low-traffic websites, and it doesn't seem to be as fast as it used to be. How can we determine the cause? If website traffic were the problem, I would expect high CPU, but CPU usage is quite low, hovering around the 15-30% mark except for very brief periods. I'm wondering whether, rather than CPU performance, the problem is disk thrashing, due to the constant reads/writes of all the small web files and database queries. It has 4x 7200 RPM SATA drives in RAID 5. Is there a way to check that it's not disk thrashing?
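    One way to test the disk-thrashing theory is iostat from the sysstat package, which reports per-device utilization and wait times; a minimal sketch:

        # Extended device statistics, refreshed every 5 seconds; sustained high
        # %util and large await values point at the disks rather than the CPU
        iostat -x 5

        # vmstat's b (blocked) and wa (I/O wait) columns tell a similar story
        vmstat 5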

  • How to stop live network traffic displayed in terminal?

    - by Jakobud
    For our network we are building a new firewall box, and we just installed Smoothwall on it to test it out. When I start up the box, before the login prompt even appears, all of the live IP traffic is displayed in the terminal (source/destination IPs, MACs, ports, etc.). I wait for the boot sequence to finish, but all I see is this IP traffic; the login prompt never comes up. I finally get sick of waiting, press CTRL + C, and it says "Entering Run Level 3" and then I finally get a login prompt. Once I log in, the IP traffic continues to fly through the terminal even as I'm trying to type commands. How do I turn this off? Is it the default setting for Smoothwall to have all this IP traffic going by on the screen? It essentially renders the terminal useless.
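    If the messages are kernel log output (for example, an iptables LOG rule printing to the console, which is a common setup in firewall distributions), lowering the console log level usually silences them; a hedged sketch:

        # Let only emergency-level kernel messages reach the console
        dmesg -n 1

        # Or persistently: the first value of kernel.printk is the console loglevel
        echo "kernel.printk = 1 4 1 7" >> /etc/sysctl.conf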

  • What should I do about OEM05Mon.exe "Creative Live! Cam Console Auto Launcher"?

    - by blackace
    OEM05Mon.exe, the "Creative Live! Cam Console Auto Launcher" (perhaps related to my 22-inch Dell monitor?). Does anyone have experience with this? Do I need to have it running? The application has a large footprint for what it does (which, most of the time, is nothing). I am tempted to just take it out of startup, but wanted to double-check...

    P.S.: I am sure it's the original application and not a virus or trojan masquerading as it...

  • Do certain tags we write in PHP affect the performance of the live server?

    - by Sachindra
    I have written some tags in PHP such as:

        <a href="<?php bloginfo('url'); ?>/?cat=<?php echo $cate_id; ?>"><?php echo $resid->post_content; ?></a>

    or even this one:

        echo "<li><a href='?cat=$cate_id'>" . $resid->post_content . "</a></li>";

    Does this in any way affect performance on the live server? I am not getting the image to appear on the live server after upload, but on my local system (on my side) things are fine.

  • Configuring LiveID authentication with SharePoint 2010

    - by ybbest
    With the addition of the new claims-based authentication framework in SharePoint 2010, SharePoint is now more loosely coupled to the authentication layer than ever. You've probably seen presentations or webinars mentioning that you can use claims authentication against providers such as Live ID and OpenID. In this blog I will show you some common problems you may hit while configuring LiveID integration with SharePoint 2010. The detailed configuration can be found in the following blogs:

    Part 1 - http://www.wictorwilen.se/Post/Visual-guide-to-Windows-Live-ID-authentication-with-SharePoint-2010-part-1.aspx
    Part 2 - http://www.wictorwilen.se/Post/Visual-guide-to-Windows-Live-ID-authentication-with-SharePoint-2010-part-2.aspx
    Part 3 - http://www.wictorwilen.se/Post/Visual-guide-to-Windows-Live-ID-authentication-with-SharePoint-2010-part-3.aspx

    Here are some problems I ran into while following the instructions:

    Problem 1: The following exception is thrown when you run the PowerShell script to create the new LiveID authentication provider:

        New-SPTrustedIdentityTokenIssuer : Exception of type 'System.ArgumentException' was thrown.
        Parameter name: claimType
        At line:1 char:42
        + $authp = New-SPTrustedIdentityTokenIssuer <<<< -Name "LiveID INT" -Description "LiveID INT"
          -Realm $realm -ImportTrustCertificate $certfile -ClaimsMappings $emailclaim,$upnclaim
          -SignInUrl "https://login.live-int.com/login.srf" -IdentifierClaim $emailclaim.InputClaimType
        + CategoryInfo : InvalidData: (Microsoft.Share...dentityProvider:SPCmdletNewSPIdentityProvider)
          [New-SPTrustedIdentityTokenIssuer], ArgumentException
        + FullyQualifiedErrorId : Microsoft.SharePoint.PowerShell.SPCmdletNewSPIdentityProvider

    Solution: You need to remove the existing SPTrustedIdentityTokenIssuer.

    1. First get the name of the existing token issuer with Get-SPTrustedIdentityTokenIssuer, then run Remove-SPTrustedIdentityTokenIssuer to remove it (a sketch of both cmdlets follows at the end of this post).
    2. After that, re-run the script; everything should work fine now.

    Problem 2: Live INT automatically logs out. Whenever I try to log in (https://login.live-int.com/login.srf), after entering a valid email/password I get redirected to the logout page.

    Solution: You can find the solution in my previous blog.
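    For reference, a sketch of the Problem 1 cleanup (the issuer name is whatever Get-SPTrustedIdentityTokenIssuer reports; "LiveID INT" matches the script above):

        # List existing trusted identity token issuers to find the name
        Get-SPTrustedIdentityTokenIssuer
        # Remove the stale issuer so New-SPTrustedIdentityTokenIssuer can recreate it
        Remove-SPTrustedIdentityTokenIssuer -Identity "LiveID INT"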
