Search Results

Search found 2808 results on 113 pages for 'volume mixer'.

Page 104/113 | < Previous Page | 100 101 102 103 104 105 106 107 108 109 110 111  | Next Page >

  • Windows Media Player on top of other DIVs

    - by HP
    I have an embedded Windows Media Player which is always on top of other DIV tags. I used wmode = opaque and WindowlessVideo = -1, but it does not help. Does anyone know how to make it appear below a certain element of the page? <object type="application/x-oleobject" classid="CLSID:6BF52A52-394A-11D3-B153-00C04F79FAA6" codebase= "http://activex.microsoft.com/activex/controls/mplayer/en/nsmp2inf.cab#Version=5,1,52,701" width="345" height="45"><param name="URL" value= "http://nhacso.net/Music/nghe_song.aspx?id=100004995" /> <param name="EnableContextMenu" value="0" /> <param name="uiMode" value="full" /> <param name="stretchToFit" value="True" /> <param name="AnimationAtStart" value="false" /> <param name="playcount" value="10" /> <param name="Volume" value="100" /> <param name="autostart" value="0" /> <param name="wmode" value="opaque" /> <param name="WindowlessVideo" value="-1" /> <embed src="http://nhacso.net/Music/nghe_song.aspx?id=100004995" type="application/x-mplayer2" width="345" height="45" align= "center" border="0" autostart="0" transparentatstart="1" animationatstart="1" showcontrols="true" showaudiocontrols="1" showpositioncontrols="0" enablecontextmenu="0" autosize="0" showstatusbar="1" displaysize="false" playcount="10" wmode="opaque" windowlessvideo="-1" /></object> Thanks
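
    A workaround that often helps when wmode is ignored (windowed media plugins paint above all HTML regardless of z-index) is to hide the player while the overlapping element is visible and restore it afterwards. A rough JavaScript sketch only; the ids "player" and "menu" are placeholders, since the markup above defines neither:

        // Hypothetical ids: "player" for the <object> above, "menu" for the element
        // that has to appear on top of it.
        var player = document.getElementById("player");
        var menu = document.getElementById("menu");

        function showMenu() {
            // Windowed plugins ignore z-index, so temporarily hide the player instead.
            player.style.visibility = "hidden";
            menu.style.display = "block";
        }

        function hideMenu() {
            menu.style.display = "none";
            player.style.visibility = "visible";
        }

    Visibility (rather than display:none) is used so the player keeps its layout box; whether playback continues while hidden depends on the plugin.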

    Read the article

  • What's wrong with this video embed code?

    - by jamietelin
    Following embed code is from http://hd.se/landskrona/2010/04/09/kunglig-glans-pa-idrottsgalan/ but it doesn't work in Internet Explorer 8. Firefox no problems. Any recommendations for improvements? Thanks for your time! <object width="480px" height="294px" id="_36313041" data="http://hd.se/static/media/html/flash/video-3/flowplayer.swf" type="application/x-shockwave-flash"> <param name="movie" value="http://hd.se/static/media/html/flash/video-3/flowplayer.swf" /> <param name="allowfullscreen" value="true" /> <param name="allowscriptaccess" value="always" /> <param name="flashvars" value='config={"key":"$3fff7448b28a8cffc85","contextMenu":["hd.se videospelare 1.0"],"plugins":{"rtmp":{"url":"http://hd.se/static/media/html/flash/video-3/flowplayer.rtmp.swf"},"controls":{"height":24,"opacity":1,"all":false,"play":true,"time":true,"scrubber":true,"playlist":false,"mute":true,"volume":true,"fullscreen":true,"backgroundColor":"#222222","backgroundGradient":"none","buttonColor":"#7c7c7c","buttonOverColor":"#36558b","progressColor":"#7c7c7c","bufferColor":"#7c7c7c","timeColor":"#ffffff","durationColor":"#ffffff","timeBgColor":"#222222","scrubberHeightRatio":0.5,"scrubberBarHeightRatio":0.5,"volumeSliderHeightRatio":0.5,"volumeBarHeightRatio":0.5,"autoHide":"fullscreen","hideDelay":1800,"tooltips":{"buttons":true,"play":"Spela","pause":"Paus","next":"Nästa","previous":"Föregående","mute":"Ljud av","unmute":"Ljud på","fullscreen":"Fullskärmsläge","fullscreenExit":"Lämna fullskärmsläge"},"tooltipColor":"#153872","tooltipTextColor":"#ffffff"},"contentIntro":{"url":"http://hd.se/static/media/html/flash/video-3/flowplayer.content.swf","top":0,"width":736,"border":"none","backgroundColor":"#202020","backgroundGradient":"none","borderRadius":"none","opacity":"85pct","display":"none","closeButton":true}},"canvas":{"backgroundColor":"#000000","backgroundGradient":"none"},"play":{"replayLabel":"Spela igen"},"screen":{"bottom":24},"clip":{"scaling":"fit","autoPlay":true},"playlist":[{"provider":"rtmp","netConnectionUrl":"rtmp://fl0.c06062.cdn.qbrick.com/06062","url":"ncode/hdstart","autoPlay":false,"scaling":"fit"},{"url":"http://hd.se/multimedia/archive/00425/_kunligglans_HD_VP6_425359a.flv","scaling":"fit","autoPlay":true},{"provider":"rtmp","netConnectionUrl":"rtmp://fl0.c06062.cdn.qbrick.com/06062","url":"ncode/hdstopp","autoPlay":true,"scaling":"fit"}]}' /> </object>
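
    One commonly suggested route for making a single embed work in both IE and Firefox is to let SWFObject generate the markup (it writes the ActiveX-style <object> for IE and the standards-style one elsewhere). A rough sketch only, assuming swfobject.js is loaded on the page, that a placeholder element with id "player" exists, and that configJson holds the exact JSON string passed as config= in the flashvars above:

        // Sketch: configJson is assumed to contain the same config= JSON string
        // used in the flashvars parameter of the original markup.
        var flashvars = { config: configJson };
        var params = { allowfullscreen: "true", allowscriptaccess: "always" };
        var attributes = { id: "_36313041" };

        swfobject.embedSWF(
            "http://hd.se/static/media/html/flash/video-3/flowplayer.swf",
            "player",          // id of the placeholder element to replace (assumed)
            "480", "294",      // dimensions from the original markup
            "9.0.0",           // minimum Flash version to require
            false,             // no express-install swf
            flashvars, params, attributes);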

    Read the article

  • Greasemonkey failing to GM_setValue()

    - by HonoredMule
    I have a Greasemonkey script that uses a JavaScript object to maintain some stored objects. It covers quite a large volume of information, but substantially less than it successfully stored and retrieved prior to encountering my problem. One value refuses to save, and I cannot for the life of me determine why. The following problem code: Works for other larger objects being maintained. Is presently handling a smaller total amount of data than previously worked. Is not colliding with any function or other object definitions. Can (optionally) successfully save the problem storage key as "{}" during code startup. this.save = function(table) { var tables = this.tables; if(table) tables = [table]; for(i in tables) { logger.log(this[tables[i]]); logger.log(JSON.stringify(this[tables[i]])); GM_setValue(tables[i] + "_" + this.user, JSON.stringify(this[tables[i]])); logger.log(tables[i] + "_" + this.user + " updated"); logger.log(GM_getValue(tables[i] + "_" + this.user)); } } The problem is consistently reproducible and the logging statements produce the following output in Firebug: Object { 54,10 = Object } // Expansion shows complete contents as expected, but there is one oddity--Firebug highlights the array keys in purple instead of the usual black for anonymous objects. {"54,10":{"x":54,"y":10,"name":"Lucky Pheasant"}} // The correctly parsed string. bookmarks_HonoredMule saved undefined I have tried altering the format of the object keys, to no effect. Further narrowing down the issue is that this particular value is successfully saved as an empty object ("{}") during code initialization, but skipping that also does not help. Reloading the page confirms that saving of the nonempty object truly failed. Any idea what could cause this behavior? I've thoroughly explored the possibility of hitting size constraints, but it doesn't appear that can be the problem--as previously mentioned, I've already reduced storage usage. Other larger objects still save, and the total number of objects, which was not high already, has further been reduced by an amount greater than the quantity of data I'm attempting to store here.
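
    One way to narrow this down is to make the failure loud rather than silent: wrap the store in try/catch and verify the round trip immediately. A debugging sketch, not part of the original script, reusing only GM_setValue, GM_getValue and the logger already shown above:

        // Debugging helper (assumption: logger is the same object used in the script).
        function safeSetValue(key, obj) {
            var json = JSON.stringify(obj);
            try {
                GM_setValue(key, json);
            } catch (e) {
                logger.log("GM_setValue threw for " + key + " (" + json.length + " chars): " + e);
                return false;
            }
            var readBack = GM_getValue(key);
            if (readBack !== json) {
                logger.log("GM_setValue silently failed for " + key + " (" + json.length + " chars)");
                return false;
            }
            return true;
        }

    Logging the serialized length on every failure at least shows whether the one stubborn key differs in size or content from the keys that do save.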

    Read the article

  • Detect whether any USB drive is inserted using a WinForms application

    - by Pavan Kumar
    I want to do the following things in my application: 1) I want to display whether any USB drive is inserted in my Visual C# WinForms application, and prompt the user to insert one if not. I just want to notify the user when a USB drive is inserted, or else prompt him to insert one, using a label or something (I want to avoid a message box, as it would keep appearing whenever a device is inserted or removed, which would be irritating for the end user). If any USB drive is present, display "USB drive detected" in the label. The user may add one or more USB sticks, but the status will remain the same. When there is none, the label will change to "No USB drives found. Please insert a USB drive". 2) When one or more USB drives are added, the volume name with the drive letter, for example "James(F:)", is added to the combobox list. The combobox list also needs to remove the entry for a USB drive automatically when that drive is removed. So when there is no USB drive, the list should be empty and the label will again prompt the user to insert a USB stick or drive.

    Read the article

  • Total Average Week using a Parameter

    - by Jose
    I have a Crystal Report that shows sales volumes, called "week to date volume". It shows the current week, previous week, and average week. The report prompts for a date parameter and I extract the week number to get the current-week and previous-week volumes. I did it this way because management wants to be able to run the report whenever they like. My problem is that for the average week I can't figure out how to get the number of weeks to divide by. The report originates from June 1st, 2010. Right now I have: DATEPART("ww", {?date}) - DATEPART("ww", DATE(2010, 6, 1)) This returns 2 right now, which is perfect, so I divide my total by 2. This code will work until the end of the year, and then I'm hooped. Any idea how I can make this a little more dynamic? I was thinking of a counter somehow, but I just can't get the logic down because the date parameter will keep changing, meaning I can't increase my counter by 1 after each week. Cheers.

    Read the article

  • VLC ActiveX plugin not playing video in updated IE9

    - by Prashant Mehta
    I am using the VLC ActiveX plugin in IE9 to play live video streams. It works perfectly in IE8, but after updating the browser from IE8 to IE9 it no longer plays the video file or live stream. Here is my code: <object type="application/x-vlc-plugin" id="vlc" width="517" height="388" classid="clsid:9BE31822-FDAD-461B-AD51-BE1D1C159921"> <param name="MRL" id="mrlVideo" value="" /> <param name="volume" value="50" /> <param name="autoplay" value="True" /> <param name="loop" value="false" /> <param name="fullscreen" value="false" /> <param name="wmode" value="transparent" /> <param name="toolbar" value="true" /> <param name="windowless" value="true" /> </object> and in JavaScript I am using this: var vlc = document.getElementById("vlc"); var options = new Array(":rtsp-tcp"); var urlVideofile = "http://IP:portnumber/" var id = vlc.playlist.add(urlVideofile, null, options); vlc.playlist.playItem(id); There is an attached image showing exactly what error appears. Any help is greatly appreciated. Thanks.
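
    Since the markup gives no feedback when something goes wrong, a first step is usually to confirm in IE9 that the ActiveX control actually initialised and to surface the exception instead of failing silently. A debugging sketch that reuses only the calls already shown above (this is not a known fix, just a way to see the real error):

        // Run after load so the <object id="vlc"> exists, then report any failure.
        window.onload = function () {
            var vlc = document.getElementById("vlc");
            if (!vlc || typeof vlc.playlist === "undefined") {
                alert("The VLC plugin did not initialise in this browser.");
                return;
            }
            try {
                var options = new Array(":rtsp-tcp");
                var urlVideofile = "http://IP:portnumber/";   // placeholder, as in the question
                var id = vlc.playlist.add(urlVideofile, null, options);
                vlc.playlist.playItem(id);
            } catch (err) {
                alert("VLC playback failed: " + err.message);
            }
        };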

    Read the article

  • Web Audio API and mobile browsers

    - by Michael
    I've run into a problem while implementing sound and music in an HTML game that I'm building. I'm using the Web Audio API, loading all the sound files with XMLHttpRequests and decoding them into an AudioBufferSourceNode with AudioContext.prototype.decodeAudioData(). It looks something like this: var request = new XMLHttpRequest(); request.open("GET", "soundfile.ogg", true); request.responseType = "arraybuffer"; request.onload = function() { context.decodeAudioData(request.response) } request.send(); Everything plays fine, but on mobile decodeAudioData takes an absurdly long time for the background music. I then tried using AudioContext.prototype.createMediaElementSource() to load the music from an HTML Audio object, since it supports streaming and doesn't have to load the whole file into memory at once. It looked something like this: var audio = new Audio('soundfile.ogg'); var source = context.createMediaElementSource(audio); var mainVolume = context.createGain(); source.connect(mainVolume); mainVolume.connect(context.destination); This loads much faster, but the audio volume isn't affected by the gain node. It works fine on desktop, so I'm assuming this is a bug/limitation of mobile Chrome (I'm testing on Android). Is there actually no good, well-performing way to handle sound on mobile browsers, or am I just doing something stupid?
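
    One pragmatic fallback sketch (an assumption, not something from the original code): drive both the gain node and the media element's own volume property, so whichever path the browser honours takes effect. HTMLMediaElement.volume works independently of the Web Audio graph:

        // Assumes the audio element and mainVolume gain node created above.
        function setMusicVolume(value) {
            mainVolume.gain.value = value;   // Web Audio path (honoured on desktop)
            audio.volume = value;            // plain media-element path (mobile fallback)
        }

        setMusicVolume(0.5);

    If the gain node is being ignored on the device, the element-level volume still changes what the user hears.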

    Read the article

  • Play playlist with MediaPlayer

    - by Kaloer
    Hi, I'm trying to play a playlist I get using the MediaStore provider. However, when I try playing a playlist nothing happens. Can a MediaPlayer play a playlist (m3u file), and do I need to set the first track to play? This is my test code in the onCreate() method: Uri uri = MediaStore.Audio.Playlists.EXTERNAL_CONTENT_URI; if(uri == null) { Log.e("Uri = null"); } String[] projection = new String[] { MediaStore.Audio.Playlists._ID, MediaStore.Audio.Playlists.NAME, MediaStore.Audio.Playlists.DATA }; Cursor c = managedQuery(uri, projection, null, null, null); if(c == null) { Toast.makeText(getApplicationContext(), R.string.alarm_tone_picker_error, Toast.LENGTH_LONG).show(); return; } if(!c.moveToFirst()) { c.close(); Toast.makeText(getApplicationContext(), R.string.alarm_tone_picker_no_music, Toast.LENGTH_LONG).show(); return; } c.moveToFirst(); try { MediaPlayer player = new MediaPlayer(); player.setDataSource(c.getString(2)); player.start(); } catch(Exception e) { e.printStackTrace(); } I have turned on every volume stream. Thank you, Kaloer

    Read the article

  • Creating a network adapter - how hard is it?

    - by Vilx-
    I'm interested in building a little (commercial) device on top of Arduino. I want it to be able to interface with network. Network as in standard Ethernet, Cat5, RJ-45, etc. I know that there is an Ethernet Shield, but it costs even more than the Arduino itself, and it's pretty big. Naturally, I want my device to be as small and as cheap as possible. So I'm thinking about recreating an Ethernet module myself. The problem is - I haven't got any experience with Ethernet, nor do I have a good idea where to start looking. Thus I can't even say if my ideas are feasible. Ultimately I would like the device to have three ports - one for incoming signal, two for outgoing, so the device is essentially a little switch where it is plugged in itself as well. The switching capabilities need not be very fast - the volume of data will be low. 10Mbit is more than enough, can be even slower. If that is not possible, a single port for controlling the device itself will also do. Another possibility I'm considering is power line communications - sending information through power lines. That's another area I've no experience with. What hardware should I be looking at, and where can I find information about the necessary software? So - can anyone tell me if these ideas are feasible, and if yes - where should I start looking?

    Read the article

  • Good Hosting Providers With Zend Framework Support

    - by manyxcxi
    I currently use ixwebhosting for my hosting services. They're cheap and work (most of the time). The databases are horribly slow, the servers are horribly slow, and their support (though usually prompt) is tough to deal with. That being said, they're cheap, I've got like 20 domains hosted in my account, none of them are high volume, and they work JUST good enough- until today. This isn't meant to be a condemnation of ixwh though. Their prices are very low for what they do offer and most things work just fine, most of the time. I need to be able to host web apps written with Zend Framework in a fairly easy fashion. The server performance can't be worse than what I've already had (a pretty low hurdle to clear), and I don't want to spend $30/mo. These are not money making websites- they're projects. My requirements are PHP 5.3, ZF support, MySQL databases, multiple domains- not much. Who should I look at, and who should I look out for? Also- I put this on SO instead of SF because of the Zend Framework specific requirement. If I'm wrong, do as you wish.

    Read the article

  • Cannot disable index during PL/SQL procedure

    - by nw
    I've written a PL/SQL procedure that would benefit if indexes were first disabled, then rebuilt upon completion. An existing thread suggests this approach: alter session set skip_unusable_indexes = true; alter index your_index unusable; [do import] alter index your_index rebuild; However, I get the following error on the first alter index statement: SQL Error: ORA-14048: a partition maintenance operation may not be combined with other operations ORA-06512: [...] 14048. 00000 - "a partition maintenance operation may not be combined with other operations" *Cause: ALTER TABLE or ALTER INDEX statement attempted to combine a partition maintenance operation (e.g. MOVE PARTITION) with some other operation (e.g. ADD PARTITION or PCTFREE which is illegal *Action: Ensure that a partition maintenance operation is the sole operation specified in ALTER TABLE or ALTER INDEX statement; operations other than those dealing with partitions, default attributes of partitioned tables/indices or specifying that a table be renamed (ALTER TABLE RENAME) may be combined at will The problem index is defined so: CREATE INDEX A11_IX1 ON STREETS ("SHAPE") INDEXTYPE IS "SDE"."ST_SPATIAL_INDEX" PARAMETERS ('ST_GRIDS=890,8010,72090 ST_SRID=2'); This is a custom index type from a 3rd-party vendor, and it causes chronic performance degradation during high-volume update/insert/delete operations. Any suggestions on how to work around this error? By the way, this error only occurs within a PL/SQL block.

    Read the article

  • Why does my Perl TCP server script hang with many TCP connections?

    - by viraptor
    I've got a strange issue with a server accepting TCP connections. Even though there are normally some processes waiting, at some volume of connections it hangs. Long version: The server is written in Perl and binds a $srv socket with the reuse flag and listen == 5. Afterwards, it forks into 10 processes with a loop of $clt=$srv->accept(); do_processing($clt); $clt->shutdown(2); The client written in C is also very simple - it sends some lines, then receives all lines available and does a shutdown(sockfd, 2); There's nothing async going on and at the end both send and receive queues are empty (as reported by netstat). Connections last only ~20ms. All clients behave the same way, are the same implementation, etc. Now let's say I'm accepting X connections from client 1 and another X from client 2. Processes still report that they're idle all the time. If I add another X connections from client 3, suddenly the server processes start hanging just after accepting. The first blocking thing they do after accept(); is while (<$clt>) ... - but they don't get any data (on the first try already). Suddenly all 10 processes are in this state and do not stop waiting. On strace, the server processes seem to hang on read(), which makes sense. There are loads of connections in TIME_WAIT state belonging to that server (~100 when the problem starts to manifest), but this might be a red herring. What could be happening here?

    Read the article

  • Behavior of Struts2 and convention-plugin when there is an Index (extends ActionSupport)

    - by hanishi
    We have an Action class named 'Index' immediately under com.example.common.action; it is annotated with @ParentPackage('default'), which is declared in a package directive in struts.xml, has "/" for its namespace, and extends "struts-default". It also declares @Result so that it responds with JSP files corresponding to the string values returned by its execute() method. In our struts.xml, the following setting is configured along with the other configurations that are needed for convention-plugin. <constant name="struts.action.extension" value=","/> When accessing /my_context/none_existing_path, the request apparently hits this Index class and the contents of the JSP declared in the Index's @Result section get returned. However, if we request /my_context/, we receive the following error: HTTP Status 404 - There is no Action mapped for namespace [/] and action name [] associated with context path [/my_context]. We want to know why accessing /my_context/none_existing_path, where none_existing_path has no matching action, can fall back to the Index class, but an error is returned when the URL requested is just /my_context/. Currently, our convention-plugin settings are declared as follows: <constant name="struts.convention.package.locators.basePackage" value="com.example"/> <constant name="struts.convention.package.locators" value="action"/> Strangely, if we change the value of struts.convention.package.locators.basePackage to com.example.common, in which the aforementioned Index class can be found immediately by narrowing the search scope, requesting /my_context/ displays the content of the JSPs declared in the @Result section of the Index class. However, as our action classes are distributed throughout the com.example.[a-z].action packages, where [a-z] represents the large volume of directories we have in our package structure, we cannot use this trick as a workaround. We have also tried placing index.jsp at the top level of the classpath and having it redirect to /my_context/index, which worked, but that is not what we want. Could this be a bug? We appreciate your responses. Thank you in advance. EDIT: JIRA registered, problem solved (from Struts 2.3.12 up)

    Read the article

  • MySQL - wondering about scaling a Twitter-like application?

    - by user246114
    Hi, I'm developing an app that is vaguely similar to Twitter, in that it allows users to follow one another. I wanted to do this using Google App Engine for its scalability promises, but it's proving kind of difficult to get running for a few different reasons. I'd basically like to have a _users table and a _followers table. Users go into the users table, follower relationships go into _followers. The problem is that each row in the users table will probably have something like 100 corresponding records in the _followers table as users start following one another. So the number of rows is going to explode quickly. Using App Engine, the volume [shouldn't] be a problem. If I go with MySQL, and I do actually start to get some traction, how do I scale this up? Am I going to just end up moving to a distributed database in the end anyway? Should I fight it out with Google App Engine? I read that Twitter was using MySQL, and they've run into this problem, and are now switching to Cassandra. Thanks

    Read the article

  • Vector Usage in MPI (C++)

    - by lsk1985
    I am new to MPI programming and still learning; I was successful up to creating derived data types by defining structures. Now I want to include a vector in my structure and send the data across processes. For example: struct Structure{ //Constructor Structure(): X(nodes),mass(nodes),ac(nodes) { //code to calculate the mass and accelerations } //Destructor ~Structure() {} //Variables double radius; double volume; vector<double> mass; vector<double> area; //and some other variables //Methods to calculate some physical properties Now, using MPI, I want to send the data in the structure across the processes. Is it possible for me to create an MPI_Type_struct with the vectors included and send the data? I tried reading through forums, but I am not able to get a clear picture from the responses given there. I hope I can get a clear idea of an approach to send the data. PS: I can send the data individually, but it is a lot of overhead to send the data with many MPI_Send/MPI_Recv calls if we consider the domain being very large (say 10000*10000).

    Read the article

  • Write file needs to be optimised for heavy traffic, part 2

    - by Clayton Leung
    For anyone interested in where I'm coming from, you can refer to part 1, but it is not necessary: write file need to optimised for heavy traffic. Below is a snippet of code I have written to capture some financial tick data from the broker API. The code runs without error. I need to optimize it, because in peak hours the zf_TickEvent method will be called more than 10000 times a second. I use a MemoryStream to hold the data until it reaches a certain size, then I output it into a text file. The broker API is only single-threaded. void zf_TickEvent(object sender, ZenFire.TickEventArgs e) { outputString = string.Format("{0},{1},{2},{3},{4}\r\n", e.TimeStamp.ToString(timeFmt), e.Product.ToString(), Enum.GetName(typeof(ZenFire.TickType), e.Type), e.Price, e.Volume); fillBuffer(outputString); } public class memoryStreamClass { public static MemoryStream ms = new MemoryStream(); } void fillBuffer(string outputString) { byte[] outputByte = Encoding.ASCII.GetBytes(outputString); memoryStreamClass.ms.Write(outputByte, 0, outputByte.Length); if (memoryStreamClass.ms.Length > 8192) { emptyBuffer(memoryStreamClass.ms); memoryStreamClass.ms.SetLength(0); memoryStreamClass.ms.Position = 0; } } void emptyBuffer(MemoryStream ms) { FileStream outStream = new FileStream("c:\\test.txt", FileMode.Append); ms.WriteTo(outStream); outStream.Flush(); outStream.Close(); } Question: any suggestions to make this even faster? I will try varying the buffer length, but in terms of code structure, is this (almost) the fastest? When the MemoryStream is filled up and I am emptying it to the file, what happens to the new data coming in? Do I need to implement a second buffer to hold that data while I am emptying my first buffer? Or is C# smart enough to figure it out? Thanks for any advice

    Read the article

  • User clicks on image, video starts playing

    - by cf_PhillipSenn
    I have an image, and I would like to make the image link to an embedded YouTube video, such that if the user clicks on the image, it starts playing in the place where the picture used to be. <div id="myvideo"> <a href="http://www.youtube.com/watch?v=Msef24JErmU&playnext_from=TL&videos=dgzKE_Lyv7o"> <img src="starryeyedsurprise.jpg"></a> </div> Something like what the above code does, except not just hyperlink to it. So I know I have to have javaScript replace what's in #myvideo with: <object width="425" height="344"><param name="movie" value="http://www.youtube.com/v/Msef24JErmU&hl=en_US&fs=1&rel=0&color1=0x2b405b&color2=0x6b8ab6"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/Msef24JErmU&hl=en_US&fs=1&rel=0&color1=0x2b405b&color2=0x6b8ab6" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="425" height="344"></embed></object> And then automatically start playing. I guess I don't need it to be a YouTube video per se, I could just host the video on my own site (it's so low volume that I don't have to worry about serving up a video every once in a while).
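
    A minimal sketch of that swap in plain JavaScript (the video id and dimensions are taken from the markup above; appending autoplay=1 to the same embed URL is what makes it start immediately):

        // Replace the image inside #myvideo with the player and autoplay it on click.
        var embedUrl = "http://www.youtube.com/v/Msef24JErmU&hl=en_US&fs=1&rel=0&autoplay=1";

        document.getElementById("myvideo").onclick = function (e) {
            e = e || window.event;
            if (e.preventDefault) { e.preventDefault(); } else { e.returnValue = false; }
            this.innerHTML =
                '<object width="425" height="344">' +
                '<param name="movie" value="' + embedUrl + '"></param>' +
                '<param name="allowFullScreen" value="true"></param>' +
                '<param name="allowscriptaccess" value="always"></param>' +
                '<embed src="' + embedUrl + '" type="application/x-shockwave-flash"' +
                ' allowscriptaccess="always" allowfullscreen="true"' +
                ' width="425" height="344"></embed>' +
                '</object>';
            return false;
        };

    The colour parameters from the original embed string are dropped here only to keep the sketch short; the same approach works with the full URL.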

    Read the article

  • How to add a root node when output method = text

    - by Akhil
    My sample input is <?xml version="1.0" encoding="UTF-8"?> <ns0:JDBC_RECEIVERDATA_MT_response xmlns:ns0="urn:parmalat.com.au:TESTSQL_REPLICATION"> <Statement_response> <response_1> <row> <XML_F52E2B61-18A1-11d1-B105-00805F49916B><![CDATA[<TransactionLog TID="1400" SeqNo="3337446" SQLTransaction="Insert into TankerLoads Values(141221,53,299,18,1,426148,6,&apos;Nov 19 2007 12:00AM&apos;,&apos;Dec 30 1899 12:59PM&apos;,3.00,20682,0,&apos;Zevo&apos;,&apos;Nov 19 2007 12:00AM&apos;,0)"/></row> <row> <XML_F52E2B61-18A1-11d1-B105-00805F49916B>um = 141221"/&gt;&lt;TransactionLog TID="1400" SeqNo="3337452" SQLTransaction="Insert into MilkPickups Values(790195,141221,0,&amp;apos;Nov 19 2007 12:00AM&amp;apos;,2433,&amp;apos;Nov 19 2007 12:00AM&amp;apos;,&amp;apos;Dec 30 1899 11:26AM&amp;apos;,3131,2.90)"/&gt; I have multiple records like this, and my output should be like <root> <TransactionLog TID="1400" SeqNo="3337446" SQLTransaction="Insert into TankerLoads Values(141221,53,299,18,1,426148,6,'Nov 19 2007 12:00AM','Dec 30 1899 12:59PM',3.00,20682,0,'Zevo','Nov 19 2007 12:00AM',0)" /> <TransactionLog TID="1400" SeqNo="3337447" SQLTransaction="Update TankerLoads Set TankerNum = 53,DriverNum = 299,CarterNum = 18,MilkTypeNum = 1,SampleNum = 426148,ReceivalBayNum = 6,UnloadDate = 'Nov 19 2007 12:00AM',UnloadTime = 'Dec 30 1899 12:59PM',Temperature = 3.00,Volume = 20682,NetWeight = 0,WeighbridgeDocket = 'Zevo',LoadPickupDate = 'Nov 19 2007 12:00AM',IsValidated = 0 Where TankerLoadNum = 141221" /></root> I am using output method text because if I use xml the tags are replaced with &lt; and &gt;, which I don't want. Moreover, if you look at the two rows above, the last record is split: half of it is in the first row and the other half continues in the second row, so I used something which leaves a single space, and I don't want even that single space. I hope I am clear; if not, please let me know and I will add more comments. Please help me out. Thank you.

    Read the article

  • Handling Email Bouncebacks in Rails

    - by aressidi
    Hi there, I've built a very basic CRM system for my rails app that allows me to send weekly user activity digests with custom text and create multi-part marketing messages which I can configure and send through a basic admin interface. I'm happy with what I've put together on the send-side of things (minus the fact that I haven't tried to volume test its capabilities), but I'm concerned about how to handle bounce-backs. I came across this plugin with associated scripts: http://github.com/kovyrin/bounces-handler I'm using Google Apps to handle my mail and really don't know enough about Perl to want to mess with the above plugin - I have enough headaches. I'm looking for a simple solution for processing bounce-backs in Rails. All my email will go out from an address like this which will be managed in Google Apps: "[email protected]." What's the best workflow for this? Can anyone post an example solution they're using keeping in mind the fact that I'm using Google Apps for the mail? Any guidance, links, or basic workflow best-practices to handle this would be greatly appreciated. Thanks! -A

    Read the article

  • Is there a unique computer identifier that can be used reliably even in a virtual machine?

    - by SaUce
    I'm writing a small client program to be run on a terminal server. I'm looking for a way to make sure that it will only run on this server, and that if it is removed from the server it will not function. I understand that there is no perfect way of securing it to make it impossible to run on other platforms, but I want to make it hard enough to stop 95% of people from trying anything. The other 5% who can hack it are not my concern. I was looking at different unique identifiers like Processor ID, Windows Product ID, Computer GUID and others. Because the terminal server is a virtual machine, I cannot locate anything that is completely unique to this machine. Any ideas on what I should look into to make this 95% secure? I do not have the time or the need to make it as secure as possible, because that would defeat the purpose of the application itself. I do not want to use the MAC address: even though it is unique to each machine, it can be easily spoofed. As for the Microsoft Product ID, because our systems team clones VM servers and we use a corporate volume key, I have already found two servers that I have access to that have the same Product ID number, and I have no idea how many others out there have the same Product ID. By 95% and 5% I simply wanted to illustrate how far I want to go with securing this software; I do not have precise statistics on how many people can do what. I believe I might need to change my approach: instead of trying to identify the machine, I may be better off identifying the user and creating group-based permissions for access to this software.

    Read the article

  • Mounting NAS share: Bad Address

    - by Korben
    I'm facing a problem that I can't solve; I hope you can help me with it. I have a QNAP TS-459U storage unit, with its own Linux, and a shared folder 'massive1' which I need to mount on my Debian server. They are connected by a regular patch cord. The Debian server has two network interfaces - eth0 and eth1. eth0 is for the Internet, eth1 is for the QNAP. So I'm running this: mount -t cifs //169.254.100.100/massive1/ /mnt/storage -o user=admin , where 169.254.100.100 is the IP of the QNAP's interface. The result I get (after entering the password): mount error(14): Bad address Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) I have tried mount.cifs, smbmount, with '/' at the end of the network share and without it, and many other variations of that command, and it is always: mount error(14): Bad address The funny thing is that when I was in the data center, I connected my netbook to the QNAP using the same scheme (with Fedora 16 on it), and it connected without any problems; I could read/write files on the QNAP's NAS share! So I'm really stuck with Debian. I can't understand what is different from Fedora that causes this error. Yes, I've used Google and couldn't find any useful info. Ping to the QNAP's IP works, I can log into the QNAP's Linux by ssh, and telnet to port 139 works. This is the network interface configuration I use on Debian: IP: 169.254.100.1 Netmask: 255.255.0.0 The only difference between connecting from Fedora and Debian is that in Fedora I had added a gateway - 169.254.100.129 - but ping to this IP is not working, so I think it's not necessary at all. P.S. ~# cat /etc/debian_version wheezy/sid ~# uname -a Linux host 2.6.32-5-openvz-amd64 #1 SMP Mon Mar 7 22:25:57 UTC 2011 x86_64 GNU/Linux ~# smbtree WORKGROUP \\HOST host server \\HOST\IPC$ IPC Service (host server) \\HOST\print$ Printer Drivers NAS \\MASSIVE1 NAS Server \\MASSIVE1\IPC$ IPC Service (NAS Server) \\MASSIVE1\massive1 \\MASSIVE1\Network Recycle Bin 1 [RAID5 Disk Volume: Drive 1 2 3 4] \\MASSIVE1\Public System default share \\MASSIVE1\Usb System default share \\MASSIVE1\Web System default share \\MASSIVE1\Recordings System default share \\MASSIVE1\Download System default share \\MASSIVE1\Multimedia System default share Please help me with solving this strange issue. Thanks in advance.

    Read the article

  • 3Ware 9650SE RAID-6, two degraded drives, one ECC, rebuild stuck

    - by cswingle
    This morning I came in the office to discover that two of the drives on a RAID-6, 3ware 9650SE controller were marked as degraded and it was rebuilding the array. After getting to about 4%, it got ECC errors on a third drive (this may have happened when I attempted to access the filesystem on this RAID and got I/O errors from the controller). Now I'm in this state: > /c2/u1 show Unit UnitType Status %RCmpl %V/I/M Port Stripe Size(GB) ------------------------------------------------------------------------ u1 RAID-6 REBUILDING 4%(A) - - 64K 7450.5 u1-0 DISK OK - - p5 - 931.312 u1-1 DISK OK - - p2 - 931.312 u1-2 DISK OK - - p1 - 931.312 u1-3 DISK OK - - p4 - 931.312 u1-4 DISK OK - - p11 - 931.312 u1-5 DISK DEGRADED - - p6 - 931.312 u1-6 DISK OK - - p7 - 931.312 u1-7 DISK DEGRADED - - p3 - 931.312 u1-8 DISK WARNING - - p9 - 931.312 u1-9 DISK OK - - p10 - 931.312 u1/v0 Volume - - - - - 7450.5 Examining the SMART data on the three drives in question, the two that are DEGRADED are in good shape (PASSED without any Current_Pending_Sector or Offline_Uncorrectable errors), but the drive listed as WARNING has 24 uncorrectable sectors. And, the "rebuild" has been stuck at 4% for ten hours now. So: How do I get it to start actually rebuilding? This particular controller doesn't appear to support /c2/u1 resume rebuild, and the only rebuild command that appears to be an option is one that wants to know what disk to add (/c2/u1 start rebuild disk=<p:-p...> [ignoreECC] according to the help). I have two hot spares in the server, and I'm happy to engage them, but I don't understand what it would do with that information in the current state it's in. Can I pull out the drive that is demonstrably failing (the WARNING drive), when I have two DEGRADED drives in a RAID-6? It seems to me that the best scenario would be for me to pull the WARNING drive and tell it to use one of my hot spares in the rebuild. But won't I kill the thing by pulling a "good" drive in a RAID-6 with two DEGRADED drives? Finally, I've seen reference in other posts to a bad bug in this controller that causes good drives to be marked as bad and that upgrading the firmware may help. Is flashing the firmware a risky operation given the situation? Is it likely to help or hurt wrt the rebuilding-but-stuck-at-4% RAID? Am I experiencing this bug in action? Advice outside the spiritual would be much appreciated. Thanks.

    Read the article

  • How do I configure Tomcat services in Ubuntu?

    - by Karan
    I have created a Tomcat script inside the /etc/init.d directory which is #!/bin/bash # description: Tomcat Start Stop Restart # processname: tomcat # chkconfig: 234 20 80 JAVA_HOME=/usr/java/jdk1.6.0_30 export JAVA_HOME PATH=$JAVA_HOME/bin:$PATH export PATH CATALINA_HOME=/usr/tomcat/apache-tomcat-6.0.32 case $1 in start) sh $CATALINA_HOME/bin/startup.sh ;; stop) sh $CATALINA_HOME/bin/shutdown.sh ;; restart) sh $CATALINA_HOME/bin/shutdown.sh sh $CATALINA_HOME/bin/startup.sh ;; esac exit 0 After this I am trying to add this into chkconfig which is as [root@blanche init.d]# chkconfig --add tomcat [root@blanche init.d]# chkconfig --level 234 tomcat on But it is giving me the following error: [root@blanche init.d]:/etc/init.d$ chkconfig --add tomcat insserv: warning: script 'K20acpi-support' missing LSB tags and overrides insserv: warning: script 'tomcat' missing LSB tags and overrides The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs. insserv: warning: script 'failsafe-x' missing LSB tags and overrides The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs. insserv: warning: script 'acpid' missing LSB tags and overrides The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs. insserv: warning: script 'dmesg' missing LSB tags and overrides The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs. insserv: warning: script 'udevmonitor' missing LSB tags and overrides The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs. insserv: warning: script 'ufw' missing LSB tags and overrides The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs. insserv: warning: script 'module-init-tools' missing LSB tags and overrides The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs. insserv: warning: script 'plymouth-splash' missing LSB tags and overrides The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs. insserv: warning: script 'gdm' missing LSB tags and overrides The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs. insserv: warning: script 'rsyslog' missing LSB tags and overrides insserv: warning: current start runlevel(s) (0 6) of script `wpa-ifupdown' overwrites defaults (empty). The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs. insserv: warning: script 'hwclock' missing LSB tags and overrides The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs. insserv: warning: script 'console-setup' missing LSB tags and overrides The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs. insserv: warning: script 'udev' missing LSB tags and overrides The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs. 
insserv: warning: script 'plymouth-log' missing LSB tags and overrides insserv: warning: current start runlevel(s) (0) of script `halt' overwrites defaults (empty). The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs. insserv: warning: script 'mysql' missing LSB tags and overrides The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs. insserv: warning: script 'atd' missing LSB tags and overrides The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs. insserv: warning: script 'network-manager' missing LSB tags and overrides The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs. insserv: warning: script 'alsa-mixer-save' missing LSB tags and overrides The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs. insserv: warning: script 'udev-finish' missing LSB tags and overrides The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs. insserv: warning: script 'screen-cleanup' missing LSB tags and overrides insserv: warning: script 'acpi-support' missing LSB tags and overrides The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs. insserv: warning: script 'avahi-daemon' missing LSB tags and overrides The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs. insserv: warning: script 'dbus' missing LSB tags and overrides The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs. insserv: warning: script 'procps' missing LSB tags and overrides The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs. insserv: warning: script 'irqbalance' missing LSB tags and overrides The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs. insserv: warning: script 'plymouth-stop' missing LSB tags and overrides The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs. insserv: warning: script 'anacron' missing LSB tags and overrides The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs. insserv: warning: script 'plymouth' missing LSB tags and overrides The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs. insserv: warning: script 'udevtrigger' missing LSB tags and overrides The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs. insserv: warning: script 'hostname' missing LSB tags and overrides The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs. insserv: warning: script 'hwclock-save' missing LSB tags and overrides insserv: warning: current start runlevel(s) (0 6) of script `networking' overwrites defaults (empty). 
insserv: warning: current start runlevel(s) (0 6) of script `umountfs' overwrites defaults (empty). insserv: warning: current start runlevel(s) (0 6) of script `umountnfs.sh' overwrites defaults (empty). The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs. insserv: warning: script 'network-interface' missing LSB tags and overrides The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs. insserv: warning: script 'network-interface-security' missing LSB tags and overrides The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs. insserv: warning: script 'cron' missing LSB tags and overrides The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs. insserv: warning: script 'apport' missing LSB tags and overrides insserv: warning: current start runlevel(s) (6) of script `reboot' overwrites defaults (empty). insserv: warning: current start runlevel(s) (0 6) of script `umountroot' overwrites defaults (empty). insserv: warning: current start runlevel(s) (0 6) of script `sendsigs' overwrites defaults (empty). insserv: There is a loop between service rsyslog and pulseaudio if stopped insserv: loop involving service pulseaudio at depth 3 insserv: loop involving service rsyslog at depth 2 insserv: loop involving service udev at depth 1 insserv: There is a loop between service rsyslog and pulseaudio if stopped insserv: loop involving service bluetooth at depth 2 insserv: exiting now without changing boot order! /sbin/insserv failed, exit code 1 tomcat 0:off 1:off 2:off 3:off 4:off 5:off 6:off Please suggest what to do for configuring a Tomcat server as a service.

    Read the article

  • IRP_MJ_WRITE latency up to 15 seconds

    - by racitup
    We have written an application that performs small (22kB) writes to multiple files at once (one thread performing asynchronous queued writes to multiple locations on behalf of other threads) on the same local volume (RAID1). 99.9% of the writes are low-latency but occasionally (maybe every minute or two) we get one or two huge latency writes (I have seen 10 seconds and above) without any real explanation. Platform: Win2003 Server with NTFS. Monitoring: Sysinternals Process Monitor (see link below) and our own application logging. We have tried multiple things to try and solve this that have been gleaned from a few websites, e.g.: Making the first part of file names unique to aid 8.3 name generation Writing files to multiple directories Changing Intel Disk Write Caching Windows File/Printer Sharing Minimize memory used Balance Maximize data throughput for file sharing Maximize data throughput for network applications System-Advanced-Performance-Advanced NtfsDisableLastAccessUpdate - use fsutil behavior set disablelastaccess 1 disable 8.3 name generation - use "fsutil behavior set disable8dot3 1" + restart Enable a large size file system cache Disable paging of the kernel code IO Page Lock Limit Turn Off (or On) the Indexing Service But nothing seems to make much difference. There's a whole host of things we haven't tried yet but we wondered if anyone had come across the same problem, a reason and a solution (programmatic or not)? We can reproduce the problem using IOMeter and a simple setup: Start IOMeter and remove all but the first worker thread in 'Topology' using the disconnect button. Select the Worker thread and put a cross in the box next to the disk you want to use in the Disk Targets tab and put '2000000' in Maximum Disk Size (NOTE: must have at least 1GB free space; sector size is 512 bytes) Next create a new access specification and add it to the worker thread: Transfer Request Size = 22kB 100% Sequential Percent of Access Spec = 100% Percent Read/Write = 100% Write Change Results Display Update Frequency to 5 seconds, Test Setup Run Time to 20 seconds and both 'Number of Workers to Spawn Automatically' settings to zero. Select the Worker Thread in the Topology panel and hit the Duplicate Worker button 59 times to create 60 threads with identical settings. Hit the 'Go' button (green flag) and monitor the Results tab. The 'Maximum I/O Response Time (ms)' always hits at least 3500 on our machine. Our machine isn't exactly slow (Xeon 8 core rack server with 4GB and onboard RAID). I'd be interested to see what other people get. We have a feeling it might be something to do with the NTFS filesystem (ours is currently 75% full of fragmented files) and we are going to try a few things around this principle. But it is also related to disk performance since we don't see it on a RAMDisk and it's not as severe on a RAID10 array. Any help is much appreciated. Richard Right-click and select 'Open Link in New Tab': ProcMon Result

    Read the article

  • chkdsk, SeaTools, and "does not have enough space to replace bad clusters"

    - by Zian Choy
    When I tried to do a Windows Vista Complete PC Backup, I received an error message that blathered about bad sectors. Then, when I ran chkdsk /r on the destination drive, this is what I got: C:\Windows\system32>chkdsk /R E: The type of the file system is NTFS. Volume label is Desktop Backup. CHKDSK is verifying files (stage 1 of 5)... 822016 file records processed. File verification completed. 1 large file records processed. 0 bad file records processed. 0 EA records processed. 0 reparse records processed. CHKDSK is verifying indexes (stage 2 of 5)... 848938 index entries processed. Index verification completed. 0 unindexed files processed. CHKDSK is verifying security descriptors (stage 3 of 5)... 822016 security descriptors processed. Security descriptor verification completed. 13461 data files processed. CHKDSK is verifying file data (stage 4 of 5)... The disk does not have enough space to replace bad clusters detected in file 239649 of name . The disk does not have enough space to replace bad clusters detected in file 239650 of name . The disk does not have enough space to replace bad clusters detected in file 239651 of name . An unspecified error occurred.f 822000 files processed) Yet, when I ran the SeaTools short & long generic tests on the Seagate disk, I didn't receive any errors. I know that I could reformat the disk and try running chkdsk /r again but I'd prefer to avoid waiting 4 hours in the hope that the problem was magically fixed. On the other hand, if I RmA the drive to Seagate, I have no SeaTools error number to use and they may claim that the drive is just fine. What should I try to do next? Side frustration: There is plenty of free hard drive space. The E: partition has 182 GB free.

    Read the article

< Previous Page | 100 101 102 103 104 105 106 107 108 109 110 111  | Next Page >