Search Results

Search found 35102 results on 1405 pages for 'text mining'.


  • Point and click synonym replacement in text area with Javascript

    - by SilentD
    I am trying to create a site that will allow you to type a sentence or passage of text, then click on words to bring up a list of synonyms (from an online API) and possibly authorized abbreviations from a list that I provide, then once clicked on it would replace that word with the word that was clicked on. It would function kind of like After the Deadline or a Javascript based spell checker. Are there any libraries set up to make something like this easy, or what kind of Javascript do I need to be looking at? Are there any tutorials or examples for this kind of thing? I am aware that the source code for After the Deadline is available, but I only need a small portion of their technology, not all of the actual grammar and spelling check technology.
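    A minimal sketch of one possible approach without a library, assuming the passage lives in a div with id "passage" and using a hard-coded synonym table as a stand-in for the online API:

      // Stand-in synonym table; in practice this would come from the online thesaurus API.
      var synonyms = { quick: ['fast', 'rapid', 'speedy'], happy: ['glad', 'content', 'cheerful'] };

      var passage = document.getElementById('passage');   // assumes <div id="passage">plain text</div>
      // Wrap each word in a clickable span.
      passage.innerHTML = passage.innerHTML.replace(/\b([A-Za-z']+)\b/g, '<span class="word">$1</span>');

      passage.onclick = function (e) {
        var el = e.target || e.srcElement;
        if (el.className !== 'word') return;
        var options = synonyms[el.innerHTML.toLowerCase()];
        if (!options) return;
        // A real UI would render a positioned popup list; prompt() keeps the sketch short.
        var choice = window.prompt('Replace "' + el.innerHTML + '" with: ' + options.join(', '), options[0]);
        if (choice) el.innerHTML = choice;
      };

    Replacing the prompt with a small popup menu and the table with an AJAX call to a synonym service gets to the behaviour described, without needing the full After the Deadline stack.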

    Read the article

  • Long lines of text in source code [closed]

    - by ale
    Possible Duplicate: Is the 80 character limit still relevant in times of widescreen monitors? I used to set a vertical guide at 80 characters in my text editor and add carriage returns when lines got too long. I later increased the value to 135 characters, and eventually switched to word wrap with no limit, though I still try to keep lines short when I can, because shortening them afterwards takes a lot of time. People at work use word wrap and don't give themselves a limit. Is this the correct way? What are you meant to do? Many thanks.

    Read the article

  • What framework for text rating site?

    - by problemofficer
    I want to start a "rate my"-style site. The rated objects are mostly texts. I want it to be rather simple. Features I need: object rating (thumbs up, thumbs down); object comments; object tags; related-object presentation based on tags; user authentication and management; a private message system; sanity checks for text inputs (i.e. prevention of code injection); caching; open source; runs on GNU/Linux. I would gladly take something tailored for my scenario, but a generic framework would be fine too. I simply don't want to write stuff like user authentication that has been written a million times, and risk security flaws. Programming language is irrelevant but python/php preferred.

    Read the article

  • PHP Echo a large block of text

    - by Thomas
    Im new to PHP and I can't figure out what the rules are for using the echo function. For example, if I need to echo a large block of css/js, do I need to add echo to each line of text or is there a way to echo a large block of code with a single echo? When I try to echo a big block of code like this one, I get an error: if (is_single()) { echo '<link type="text/css" rel="stylesheet" href="http://jotform.com/css/styles/form.css"/><style type="text/css"> .form-label{ width:150px !important; } .form-label-left{ width:150px !important; } .form-line{ padding:10px; } .form-label-right{ width:150px !important; } body, html{ margin:0; padding:0; background:false; } .form-all{ margin:0px auto; padding-top:20px; width:650px !important; color:Black; font-family:Verdana; font-size:12px; } </style> <link href="http://jotform.com/css/calendarview.css" rel="stylesheet" type="text/css" /> <script src="http://jotform.com/js/prototype.js" type="text/javascript"></script> <script src="http://jotform.com/js/protoplus.js" type="text/javascript"></script> <script src="http://jotform.com/js/protoplus-ui.js" type="text/javascript"></script> <script src="http://jotform.com/js/jotform.js?v3" type="text/javascript"></script> <script src="http://jotform.com/js/location.js" type="text/javascript"></script> <script src="http://jotform.com/js/calendarview.js" type="text/javascript"></script> <script type="text/javascript"> JotForm.init(function(){ $('input_6').hint('ex: [email protected]'); }); </script>'; }else { } Is there a better way to echo large blocks of code without a lot of work (adding echo to each line for example)?
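    For reference, one standard answer here is PHP's heredoc/nowdoc syntax, which lets a single echo emit an arbitrarily large block without escaping quotes; a minimal sketch trimmed to a few of the lines above:

      <?php
      if (is_single()) {
          // Heredoc: everything up to the closing identifier is emitted by one echo.
          // Use <<<'BLOCK' (nowdoc, PHP 5.3+) instead if you want no variable interpolation at all.
          echo <<<BLOCK
      <link type="text/css" rel="stylesheet" href="http://jotform.com/css/styles/form.css"/>
      <style type="text/css">
        .form-label { width: 150px !important; }
      </style>
      <script src="http://jotform.com/js/prototype.js" type="text/javascript"></script>
      BLOCK;
      }
      ?>

    (Before PHP 7.3 the closing identifier must start at the very beginning of its line. An alternative with the same effect is simply to close the PHP tag with ?>, write the HTML/CSS/JS literally, and reopen with <?php.)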

    Read the article

  • Align text box with menu control using bootstrap

    - by JamesMitLondon
    Hi I am using bootstrap styles for my asp.net web application and I have a menu control at the top. I want to insert a search text box at the top right on the same line as the menu bar. Following is my code. Can anyone please suggest how to do this? Thanks. <div id="container"> <form runat="server" class="navbar-form navbar-left" role="search"> <div class = "navbar"> <div class="navbar-inner"> <div class="container"> <!-- .btn-navbar is used as the toggle for collapsed navbar content --> <a class="btn btn-navbar" data-target=".nav-collapse" data-toggle="collapse"> <span class="icon-bar"></span> <span class="icon-bar"></span> <span class="icon-bar"></span> </a> <div class="nav-collapse collapse"> <asp:Menu ID="NavigationMenu" runat="server" EnableViewState="false" IncludeStyleBlock="false" Orientation="Horizontal" CssClass="navbar" StaticMenuStyle-CssClass="nav" StaticSelectedStyle-CssClass="active" DynamicMenuStyle-CssClass="dropdown-menu"> <Items> <asp:MenuItem Text="Home" ToolTip="Home"></asp:MenuItem> <asp:MenuItem Text="Music" ToolTip="Music"> <asp:MenuItem Text="Classical" ToolTip="Classical" /> <asp:MenuItem Text="Rock" ToolTip="Rock" /> <asp:MenuItem Text="Jazz" ToolTip="Jazz" /> </asp:MenuItem> <asp:MenuItem Text="Movies" ToolTip="Movies"> <asp:MenuItem Text="Action" ToolTip="Action" /> <asp:MenuItem Text="Drama" ToolTip="Drama" /> <asp:MenuItem Text="Musical" ToolTip="Musical" /> </asp:MenuItem> </Items> </asp:Menu> <div class="form-group"> <input type="text" class="form-control" placeholder="Search"/> <button type="submit" class="btn btn-default">Submit</button> </div> </div> </div> </div> </div> </form> </div> `

    Read the article

  • How to get rank from full-text search query with Linq to SQL?

    - by Stephen Jennings
    I am using Linq to SQL to call a stored procedure which runs a full-text search and returns the rank plus a few specific columns from the table Article. The rank column is the rank returned from the SQL function FREETEXTTABLE(). I've added this sproc to the data model designer with return type Article. This is working to get the columns I need; however, it discards the ranking of each search result. I'd like to get this information so I can display it to the user. So far, I've tried creating a new class RankedArticle which inherits from Article and adds the column Rank, then changing the return type of my sproc mapping to RankedArticle. When I try this, an InvalidOperationException gets thrown: Data member 'Int32 ArticleID' of type 'Heap.Models.Article' is not part of the mapping for type 'RankedArticle'. Is the member above the root of an inheritance hierarchy? I can't seem to find any other questions or Google results from people trying to get the rank column, so I'm probably missing something obvious here.
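    One workaround often suggested for this error is to map the result to a standalone class rather than a subclass of Article, since the designer's inheritance mapping doesn't know about the new type. A sketch - the DataContext name, sproc name and column names here are assumptions, not taken from the question:

      // Plain class (not derived from Article) whose properties match the sproc's result columns.
      public class RankedArticle
      {
          public int ArticleID { get; set; }
          public string Title { get; set; }
          public int Rank { get; set; }   // rank value returned by FREETEXTTABLE()
      }

      using (var db = new HeapDataContext())
      {
          // ExecuteQuery maps any result shape onto a POCO, bypassing the designer mapping.
          var results = db.ExecuteQuery<RankedArticle>(
                  "EXEC SearchArticles @query = {0}", searchText)
              .OrderByDescending(a => a.Rank)
              .ToList();
      }

    Alternatively, the sproc's return type can be switched in the designer from Article to the new standalone class; the key point is that it should not inherit from an entity that already has a table mapping.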

    Read the article

  • PostgreSQL: Full Text Search - How to search partial words?

    - by Anthoni Gardner
    Hello, Following a question posted here about how I can increase the speed of one of my SQL search methods, I was advised to update my table to make use of Full Text Search. This is what I have now done, using GiST indexes to make searching faster. On some of the "plain" queries I have noticed a marked increase, which I am very happy about. However, I am having difficulty searching for partial words. For example, I have several records that contain the word Squire (454) and several records that contain Squirrel (173). Now if I search for Squire it only returns the 454 records, but I also want it to return the Squirrel records as well. My query looks like this: SELECT title FROM movies WHERE vectors @@ to_tsquery('squire'); I thought I could do to_tsquery('squire%') but that does not work. How do I get it to search for partial matches? Also, in my database I have records that are movies and others that are just TV shows. These are differentiated by the "" over the name, so "Munsters" is a TV show, whereas The Munsters is the film of the show. What I want to be able to do is search for just the TV shows AND just the movies. Any idea on how I can achieve this? Regards Anthoni
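    For the partial-match part, PostgreSQL 8.4 and later support prefix matching inside to_tsquery with the :* operator, which would cover both Squire and Squirrel; a sketch (the LIKE clause for the quoted TV-show titles is just one possible way to handle the second question):

      -- Prefix search: matches any lexeme starting with 'squir' (squire, squirrel, ...).
      SELECT title
      FROM movies
      WHERE vectors @@ to_tsquery('squir:*');

      -- TV shows vs. films: the quoted names can be filtered with a plain pattern match.
      SELECT title
      FROM movies
      WHERE vectors @@ to_tsquery('squir:*')
        AND title LIKE '"%';

    On versions before 8.4 there is no prefix operator, so the usual fallbacks are a trigram index (pg_trgm) or an additional ILIKE condition alongside the full-text match.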

    Read the article

  • How do you parse a paragraph of text into sentences? (preferably in Ruby)

    - by henry74
    How do you take a paragraph or a large amount of text and break it into sentences (preferably using Ruby), taking into account cases such as Mr. and Dr. and U.S.A.? (Assuming you just put the sentences into an array of arrays.) UPDATE: One possible solution I thought of involves using a parts-of-speech tagger (POST) and a classifier to determine the end of a sentence.
    Getting data from: Mr. Jones felt the warm sun on his face as he stepped out onto the balcony of his summer home in Italy. He was happy to be alive.
    CLASSIFIER: Mr./PERSON Jones/PERSON felt/O the/O warm/O sun/O on/O his/O face/O as/O he/O stepped/O out/O onto/O the/O balcony/O of/O his/O summer/O home/O in/O Italy/LOCATION ./O He/O was/O happy/O to/O be/O alive/O ./O
    POST: Mr./NNP Jones/NNP felt/VBD the/DT warm/JJ sun/NN on/IN his/PRP$ face/NN as/IN he/PRP stepped/VBD out/RP onto/IN the/DT balcony/NN of/IN his/PRP$ summer/NN home/NN in/IN Italy./NNP He/PRP was/VBD happy/JJ to/TO be/VB alive./IN
    Can we assume, since Italy is a location, the period is a valid end of the sentence? Since ending on "Mr." would have no other part of speech, can we assume this is not a valid end-of-sentence period? Is this the best answer to my question? Thoughts?
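    A rough heuristic sketch in Ruby (1.9+ for the lookbehind), splitting on sentence punctuation and re-joining when the previous chunk ends in a known abbreviation - the abbreviation list is a placeholder and this is far less robust than the tagger/classifier idea above:

      ABBREVIATIONS = %w[Mr. Mrs. Dr. Prof. St. U.S.A.]

      def split_sentences(text)
        # First pass: split after . ! or ? when followed by whitespace and a capital letter.
        chunks = text.split(/(?<=[.!?])\s+(?=[A-Z])/)
        sentences = []
        chunks.each do |chunk|
          if sentences.any? && ABBREVIATIONS.any? { |a| sentences.last.end_with?(a) }
            # Previous chunk ended with an abbreviation, so the split was spurious; re-join.
            sentences[-1] = sentences.last + " " + chunk
          else
            sentences << chunk
          end
        end
        sentences
      end

      p split_sentences("Mr. Jones felt the warm sun on his face in Italy. He was happy to be alive.")
      # => ["Mr. Jones felt the warm sun on his face in Italy.", "He was happy to be alive."]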

    Read the article

  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
    For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500 frame 3D ray-traced animation. At the end of the presentation, when the animation was complete, I would play the animation delete the Azure deployment. The standing joke with the audience was that it was that it was a “$2 demo”, as the compute charges for running the 16 instances for an hour was $1.92, factor in the bandwidth charges and it’s a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing, you pay for what you use, and if you need massive compute power for a short period of time using Windows Azure can work out very cost effective. The “$2 demo” was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed in a one hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a “$2 demo” to a “$30 demo”. The challenge was to create a visually appealing animation in high definition format and keep the demo time down to one hour.  This article will take a run through how I achieved this. Ray Tracing Ray tracing, a technique for generating high quality photorealistic images, gained popularity in the 90’s with companies like Pixar creating feature length computer animations, and also the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray traced image, the ray of light that would pass from the view point must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate if the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image. Pin-Board Toys Having very little artistic talent and a basic understanding of maths I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I’ve always liked the pin-board desktop toys that become popular in the 80’s and when I was working as a 3D animator back in the 90’s I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months. PolyRay Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray-tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours processing on a 486, produced a high quality ray-traced image. The following is an example of a basic PolyRay scene file. 
background Midnight_Blue   static define matte surface { ambient 0.1 diffuse 0.7 } define matte_white texture { matte { color white } } define matte_black texture { matte { color dark_slate_gray } } define position_cylindrical 3 define lookup_sawtooth 1 define light_wood <0.6, 0.24, 0.1> define median_wood <0.3, 0.12, 0.03> define dark_wood <0.05, 0.01, 0.005>     define wooden texture { noise surface { ambient 0.2  diffuse 0.7  specular white, 0.5 microfacet Reitz 10 position_fn position_cylindrical position_scale 1  lookup_fn lookup_sawtooth octaves 1 turbulence 1 color_map( [0.0, 0.2, light_wood, light_wood] [0.2, 0.3, light_wood, median_wood] [0.3, 0.4, median_wood, light_wood] [0.4, 0.7, light_wood, light_wood] [0.7, 0.8, light_wood, median_wood] [0.8, 0.9, median_wood, light_wood] [0.9, 1.0, light_wood, dark_wood]) } } define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 }} define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7  } define steely_blue texture { shiny { color black } } define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }   viewpoint {     from <4.000, -1.000, 1.000> at <0.000, 0.000, 0.000> up <0, 1, 0> angle 60     resolution 640, 480 aspect 1.6 image_format 0 }       light <-10, 30, 20> light <-10, 30, -20>   object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }   object { sphere <0.000, 0.000, 0.000>, 1.00 chrome } object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }   After setting up the background and defining colors and textures, the viewpoint is specified. The “camera” is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct, this will be fixed later. Modeling the Pin Board The frame of the pin-board is made up of three boxes, and six cylinders, the front box is modeled using a clear, slightly reflective solid, with the same refractive index of glass. The other shapes are modeled as metal. object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass } object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue } object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue } object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }   In order to create the matrix of pins that make up the pin board I used a basic console application with a few nested loops to create two intersecting matrixes of pins, which models the layout used in the pin boards. The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code. 
For the complete animation 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used. The position of the pin can be set based on the color of the pixel at the appropriate position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated. The challenge now was to make a cool animation. The Azure Logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not be the same as the depth of the objects from the camera. In order to simulate the pin board accurately a series of frames from a depth camera could be used. Windows Kinect The Kenect controllers for the X-Box 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kenect, providing easy access for .NET developers to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developers perspective. Both the X-Box 360 Kinect and the Windows Kinect will work with the Kinect SDK, the Windows Kinect is required for commercial applications, but the X-Box Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode to allow depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions. Creating a Depth Field Animation The depth field animation used to set the positions of the pin in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screen shot of the modified Kinect Explorer application is shown below. The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence JPEG files that will be used to animate the pins in the animation the Start and Stop buttons are used to start and stop the image recording. En example of one of the depth images is shown below. Once a series of 2,000 depth images has been captured, the task of creating the animation can begin. Rendering a Test Frame In order to test the creation of frames and get an approximation of the time required to render each frame a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board. 
The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below. The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster that an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour. Windows Azure Worker Roles The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server. Number of Servers Cost 1 $500 16 $8,000 256 $128,000   As well as the cost of the servers, there would be additional costs for networking, racks etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling. The Windows Azure compute services provide worker roles, which are ideal for performing processor intensive compute tasks. With the scalability available in Windows Azure a job that takes 256 hours to complete could be perfumed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below. Number of Worker Roles Render Time Cost 1 256 hours $30.72 16 16 hours $30.72 256 1 hour $30.72   Using worker roles in Windows Azure provides the same cost for the 256 hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles. Creating a Render Farm in Windows Azure The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components: ·         On-Premise o   Windows Kinect – Used combined with the Kinect Explorer to create a stream of depth images. o   Animation Creator – This application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages added to the jobs queue. o   Process Monitor – This application queries the role instance lifecycle table and displays statistics about the render farm environment and render process. o   Image Downloader – This application polls the image queue and downloads the rendered animation files once they are complete. ·         Windows Azure o   Azure Storage – Queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.   The architecture of each worker role is shown below.   The worker role is configured to use local storage, which provides file storage on the worker role instance that can be use by the applications to render the image and transform the format of the image. 
The service definition for the worker role with the local storage configuration highlighted is shown below. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="CloudRay" >   <WorkerRole name="CloudRayWorkerRole" vmsize="Small">     <Imports>     </Imports>     <ConfigurationSettings>       <Setting name="DataConnectionString" />     </ConfigurationSettings>     <LocalResources>       <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />     </LocalResources>   </WorkerRole> </ServiceDefinition>     The two executable programs, PolyRay.exe and DTA.exe are included in the Azure project, with Copy Always set as the property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90’s it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90’s. Each worker roll will use the following process to render the animation frames. 1.       The worker process polls the job queue, if a job is available the scene description file is downloaded from blob storage to local storage. 2.       PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file. 3.       DTA.exe is started in a process with the appropriate command line arguments convert the TGA file to a JPG file. 4.       The JPG file is uploaded from local storage to the images blob container. 5.       A message is placed on the images queue to indicate a new image is available for download. 6.       The job message is deleted from the job queue. 7.       The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used. The code for this is shown below. public override void Run() {     // Set environment variables     string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);     string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);       LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");     string localStorageRootPath = rayStorage.RootPath;       JobQueue jobQueue = new JobQueue("renderjobs");     JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");     CloudRayBlob sceneBlob = new CloudRayBlob("scenes");     CloudRayBlob imageBlob = new CloudRayBlob("images");     RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();       Frames = 0;       while (true)     {         // Get the render job from the queue         CloudQueueMessage jobMsg = jobQueue.Get();           if (jobMsg != null)         {             // Get the file details             string sceneFile = jobMsg.AsString;             string tgaFile = sceneFile.Replace(".pi", ".tga");             string jpgFile = sceneFile.Replace(".pi", ".jpg");               string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);             string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);             string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);               // Copy the scene file to local storage             sceneBlob.DownloadFile(sceneFilePath);               // Run the ray tracer.             
string polyrayArguments =                 string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);             Process polyRayProcess = new Process();             polyRayProcess.StartInfo.FileName =                 Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);             polyRayProcess.StartInfo.Arguments = polyrayArguments;             polyRayProcess.Start();             polyRayProcess.WaitForExit();               // Convert the image             string dtaArguments =                 string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName (jpgFilePath));             Process dtaProcess = new Process();             dtaProcess.StartInfo.FileName =                 Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);             dtaProcess.StartInfo.Arguments = dtaArguments;             dtaProcess.Start();             dtaProcess.WaitForExit();               // Upload the image to blob storage             imageBlob.UploadFile(jpgFilePath);               // Add a download job.             downloadQueue.Add(jpgFile);               // Delete the render job message             jobQueue.Delete(jobMsg);               Frames++;         }         else         {             Thread.Sleep(1000);         }           // Log the worker role activity.         roleLifecycleDataSource.Alive             ("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);     } }     Monitoring Worker Role Instance Lifecycle In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.   public class RoleLifecycle : TableServiceEntity {     public string ServerName { get; set; }     public string Status { get; set; }     public DateTime StartTime { get; set; }     public DateTime EndTime { get; set; }     public long SecondsRunning { get; set; }     public DateTime LastActiveTime { get; set; }     public int Frames { get; set; }     public string Comment { get; set; }       public RoleLifecycle()     {     }       public RoleLifecycle(string roleName)     {         PartitionKey = roleName;         RowKey = Utils.GetAscendingRowKey();         Status = "Started";         StartTime = DateTime.UtcNow;         LastActiveTime = StartTime;         EndTime = StartTime;         SecondsRunning = 0;         Frames = 0;     } }     A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame to record the total number of frames rendered and the total processing time. These statistics are used be the monitoring application to determine the effectiveness of use of resources in the render farm. Rendering the Animation The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows for the application to be tested in the cloud environment, and the performance of the application determined. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below. 
<?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="16" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     About six minutes after deploying the application the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames.   Five minutes after the first worker role became active the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation.   With 16 worker roles u and running it can be seen that one hour and 45 minutes CPU time has been used to render 32 frames with a render time of just under 10 minutes.     At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour more processing power will be required. Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.   <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="256" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     Six minutes after the new configuration has been applied 75 new worker roles have activated and are processing their first frames.   Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds.   We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes.   The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute. 
The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in distributing the load across the 256 worker role instances. The first 16 instances that were deployed first have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each.   Completed Animation I’ve uploaded the completed animation to YouTube, a low resolution preview is shown below. Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles   The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc Effective Use of Resources According to the CloudRay monitor statistics the animation took 6 days, 7 hours and 22 minutes CPU to render, this works out at 152 hours of compute time, rounded up to the nearest hour. As the usage for the worker role instances are billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles, and utilized the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application. The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively. Grid Computing Scenarios Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective. ·         Windows Azure can provide massive compute power, on demand, in a matter of minutes. ·         The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution. ·         Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget. ·         No charges for inbound data transfer makes the uploading of large data sets to Windows Azure Storage services cost effective. (Transaction charges still apply.) Tips for using Windows Azure for Grid Computing Scenarios I found the implementation of a render farm using Windows Azure a fairly simple scenario to implement. I was impressed by ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes, in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure. ·         Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances. ·         Windows Azure accounts are typically limited to 20 cores. 
If you need to use more than this, a call to support and a credit card check will be required. ·         Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed. Schedule the workload to start just after the clock hour has started. ·         Monitor the utilization of the resources you are provisioning, and ensure that you are not paying for worker roles that are idle. ·         If you are deploying third party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective. ·         Third party software may also require installation onto the worker roles, which can be accomplished using start-up tasks. Bear in mind that adding a startup task and possible reboot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles. ·         Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal it could get very expensive!

    Read the article

  • Using regex to extract variables from a plain-text form letter?

    - by Yaaqov
    Hi - I'm looking for a good example of using Regular Expressions in PHP to "reverse engineer" a form letter (with a known format, of course) that has been pasted into a multiline textbox and sent to a script for processing. So, for example, let's assume this is the original plain-text input (taken from a USDA press release): WASHINGTON, April 5, 2010 - North American Bison Co-Op, a New Rockford, N.D., establishment is recalling approximately 25,000 pounds of whole beef heads containing tongues that may not have had the tonsils completely removed, which is not compliant with regulations that require the removal of tonsils from cattle of all ages, the U.S. Department of Agriculture's Food Safety and Inspection Service (FSIS) announced today. For clarity, the fields that are variables are highlighted below: [pr_city=]WASHINGTON, [pr_date=]April 5, 2010 - [corp_name=]North American Bison Co-Op, a [corp_city=]New Rockford, [corp_state=]N.D., establishment is recalling approximately [amount=]25,000 pounds of [product=]whole beef heads containing tongues that may not have had the tonsils completely removed, which is not compliant with regulations that require [reason=]the removal of tonsils from cattle of all ages, the U.S. Department of Agriculture's Food Safety and Inspection Service (FSIS) announced today. How could I efficiently extract the contents of the pr_city pr_date corp_name corp_city corp_state amount product reason fields from my example? Any help would be appreciated, thanks.
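    A sketch of one possible pattern using named capture groups (PCRE, PHP 5.2.2+), written against the exact wording of the example letter and therefore fragile if the format changes; $letter is assumed to hold the pasted text:

      <?php
      $pattern = '/(?P<pr_city>[A-Z ]+), (?P<pr_date>\w+ \d{1,2}, \d{4}) - '
               . '(?P<corp_name>.+?), an? (?P<corp_city>[^,]+), (?P<corp_state>[^,]+), establishment '
               . 'is recalling approximately (?P<amount>[\d,]+ pounds) of (?P<product>.+?), '
               . 'which is not compliant with regulations that require (?P<reason>.+?), '
               . 'the U\.S\. Department of Agriculture/s';

      if (preg_match($pattern, $letter, $m)) {
          $fields = array(
              'pr_city'    => $m['pr_city'],
              'pr_date'    => $m['pr_date'],
              'corp_name'  => $m['corp_name'],
              'corp_city'  => $m['corp_city'],
              'corp_state' => $m['corp_state'],
              'amount'     => $m['amount'],
              'product'    => $m['product'],
              'reason'     => $m['reason'],
          );
      }
      ?>

    The general trick is to anchor each variable field on the fixed boilerplate phrases around it ("is recalling approximately", "which is not compliant with regulations that require") and to use the s modifier so line breaks inside the pasted letter don't matter.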

    Read the article

  • Is there any tool to extract keywords from an English text or article in Java?

    - by user555581
    Dear Experts, I am trying to identify the type of a web site (in English) automatically. I download the site's homepage, parse the HTML, and get the content of the page - for example, the sample context from CNN.com shown below. I then try to get the keywords of the page and map them against my database: if the keywords include words like news or breaking news, the site is classified as a news site; if there are words like healthy or medical, it is a medical site. There are tools that can do text segmentation, but it is not easy to find one that handles the semantics - for example, "online shopping" is a single keyword and should not be split into two words, since the combination is the useful information, whereas "online" and "shopping" on their own are less useful (the text might just as well contain "online travel"). Sample context from CNN.com: • Newark, JFK airports reopen • 1 runway reopens at LaGuardia Airport • Over 4,155 flights were cancelled Monday • FULL STORY * LaGuardia Airport snowplows busy Video * Are you stranded? | Airport delays * Safety tips for winter weather * Frosty fun Video | Small dog, deep snow Latest news * Easter eggs used to smuggle cocaine * Salmonella forces cilantro, parsley recall * Obama's surprising verdict on Vick * Blue Note baritone Bernie Wilson dead * Busch aide to 911: She's not waking up * Girl, 15, last seen working at store in '90 * Teena Marie's death shocks fans * Terror network 'dismantled' in Morocco * Saudis: 'Militant' had al Qaeda ties * Ticker: Gov. blasts Obama 'birthers' * Game show goof is 800K mistakeVideo * Chopper saves calf on frozen pondVideo * Pickpocketing becomes hands-freeVideo * Chilean miners going to Disney World * Who's the most intriguing of 2010? * Natalie Portman is pregnant, engaged * 'Convert all gifts from aunt' CNNMoney * Who controls the thermostat at home? * This Just In: CNN's news blog
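    There are libraries aimed at this (Apache Lucene's analyzers, for instance), but the core frequency-based idea can be sketched in plain Java; the stop-word list and length threshold below are placeholders, and this does not solve the multi-word phrase problem ("online shopping"), which usually needs n-gram counting or a phrase dictionary:

      import java.util.*;

      public class KeywordExtractor {
          private static final Set<String> STOP_WORDS = new HashSet<String>(
              Arrays.asList("the", "a", "an", "of", "to", "in", "and", "is", "it", "for"));

          public static List<String> topKeywords(String text, int limit) {
              // Count how often each non-stop-word token occurs.
              Map<String, Integer> counts = new HashMap<String, Integer>();
              for (String token : text.toLowerCase().split("[^a-z]+")) {
                  if (token.length() < 3 || STOP_WORDS.contains(token)) continue;
                  Integer c = counts.get(token);
                  counts.put(token, c == null ? 1 : c + 1);
              }
              // Sort tokens by descending frequency and keep the top few as "keywords".
              List<Map.Entry<String, Integer>> entries =
                  new ArrayList<Map.Entry<String, Integer>>(counts.entrySet());
              Collections.sort(entries, new Comparator<Map.Entry<String, Integer>>() {
                  public int compare(Map.Entry<String, Integer> a, Map.Entry<String, Integer> b) {
                      return b.getValue() - a.getValue();
                  }
              });
              List<String> keywords = new ArrayList<String>();
              for (int i = 0; i < Math.min(limit, entries.size()); i++) {
                  keywords.add(entries.get(i).getKey());
              }
              return keywords;
          }
      }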

    Read the article

  • Please help debug this ASP.Net [VB] code. Trying to write to text file from SQL Server DB.

    - by NJTechGuy
    I am a PHP programmer. I have no .Net coding experience (last seen it 4 years ago). Not interested in code-behind model since this is a quick temporary hack. What I am trying to do is generate an output.txt file whenever the user submits new data. So an output.txt file if exists should be replaced with the new one. I want to write data in this format : 123|Java Programmer|2010-01-01|2010-02-03 124|VB Programmer|2010-01-01|2010-02-03 125|.Net Programmer|2010-01-01|2010-02-03 I don't know VB, so not sure about string manipulations. Hope a kind soul can help me with this. I will be grateful to you. Thank you :) <%@ Import Namespace="System.IO" %> <%@ Import Namespace="System.Data" %> <%@ Import Namespace="System.Data.SqlClient" %> <script language="vb" runat="server"> sub Page_Load(sender as Object, e as EventArgs) Dim sqlConn As New SqlConnection("Data Source=winsqlus04.1and1.com;Initial Catalog=db28765269;User Id=dbo2765469;Password=ByhgstfH;") Dim myCommand As SqlCommand Dim dr As SqlDataReader Dim FILENAME as String = Server.MapPath("Output4.txt") Dim objStreamWriter as StreamWriter ' If Len(Dir$(FILENAME)) > 0 Then Kill(FILENAME) objStreamWriter = File.AppendText(FILENAME) Try sqlConn.Open() 'opening the connection myCommand = New SqlCommand("SELECT id, title, CONVERT(varchar(10), expirydate, 120) AS [expirydate],CONVERT(varchar(10), creationdate, 120) AS [createdate] from tblContact where flag = 0 AND ACTIVE = 1", sqlConn) 'executing the command and assigning it to connection dr = myCommand.ExecuteReader() While dr.Read() objStreamWriter.WriteLine("JobID: " & dr(0).ToString()) objStreamWriter.WriteLine("JobID: " & dr(2).ToString()) objStreamWriter.WriteLine("JobID: " & dr(3).ToString()) End While dr.Close() sqlConn.Close() Catch x As Exception End Try objStreamWriter.Close() Dim objStreamReader as StreamReader objStreamReader = File.OpenText(FILENAME) Dim contents as String = objStreamReader.ReadToEnd() lblNicerOutput.Text = contents.Replace(vbCrLf, "<br>") objStreamReader.Close() end sub </script> <asp:label runat="server" id="lblNicerOutput" Font-Name="Verdana" />
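    For the pipe-delimited format shown at the top of the question, a sketch of the changes inside the existing code - File.CreateText overwrites the old Output4.txt instead of appending, and the column order (id, title, creation date, expiry date) is an assumption:

      ' Overwrite rather than append, so each submission replaces the previous file.
      objStreamWriter = File.CreateText(FILENAME)
      ' ... open the connection and execute the reader as in the original ...
      While dr.Read()
          ' One pipe-delimited line per row, e.g. 123|Java Programmer|2010-01-01|2010-02-03
          objStreamWriter.WriteLine(String.Format("{0}|{1}|{2}|{3}", _
              dr("id"), dr("title"), dr("createdate"), dr("expirydate")))
      End While

    It is also worth logging the exception in the empty Catch block; as written, any SQL error is silently swallowed and the file simply comes out empty.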

    Read the article

  • How to restrict text search to a certain subset of the database ?

    - by Nikhil Garg
    I have a large central database of around 1 million heavy records. In my app, for every user I would have a subset of rows from central table, which would be very small (probably 100 records each).When a particular user has logged in , I would want to search on this data set only. Example: Say I have a central database of all cars in the world. I have a user profile for General Motors(GM) , Ferrari etc. When GM is logged in I just want to search(a full text search and not fire a sql query) for those cars which are manufactured by GM. Also GM may launch/withdraw a model in which case central db would be updated & so would be rowset associated with GM. In case of acquisitions, db of certain profiles may change without launch/removal of new car. So central db wont change then , but rowsets may. Whats the best way to implement such a design ? These smaller row sets would need to be dynamic depending on user activities. We are on Rails 2.3.5 and use thinking_sphinx as the connector and Sphinx/MySQL for search and relational associations.

    Read the article

  • Possible to rank partial matches in Postgres full text search?

    - by Joe
    I'm trying to calculate a ts_rank for a full-text match where some of the terms in the query may not be in the ts_vector against which it is being matched. I would like the rank to be higher in a match where more words match. Seems pretty simple? Because not all of the terms have to match, I have to | the operands, to give a query such as to_tsquery('one|two|three') (if it was &, all would have to match). The problem is, the rank value seems to be the same no matter how many words match. In other words, it's maxing rather than multiplying the clauses. select ts_rank('one two three'::tsvector, to_tsquery('one')); gives 0.0607927. select ts_rank('one two three'::tsvector, to_tsquery('one|two|three|four')); gives the expected lower value of 0.0455945 because 'four' is not the vector. But select ts_rank('one two three'::tsvector, to_tsquery('one|two')); gives 0.0607927 and likewise select ts_rank('one two three'::tsvector, to_tsquery('one|two|three')); gives 0.0607927 I would like the result of ts_rank to be higher if more terms match. Possible? To counter one possible response: I cannot calculate all possible subsequences of the search query as intersections and then union them all in a query because I am going to be working with large queries. I'm sure there are plenty of arguments against this anyway! Edit: I'm aware of ts_rank_cd but it does not solve the above problem.

    Read the article

  • 12c - SQL Text Expansion

    - by noreply(at)blogger.com (Thomas Kyte)
    Here is another small but very useful new feature in Oracle Database 12c - SQL Text Expansion.  It will come in handy in two cases:You are asked to tune what looks like a simple query - maybe a two table join with simple predicates.  But it turns out the two tables are each views of views of views and so on... In other words, you've been asked to 'tune' a 15 page query, not a two liner.You are asked to take a look at a query against tables with VPD (virtual private database) policies.  In order words, you have no idea what you are trying to 'tune'.A new function, EXPAND_SQL_TEXT, in the DBMS_UTILITY package makes seeing what the "real" SQL is quite easy. For example - take the common view ALL_USERS - we can now:ops$tkyte%ORA12CR1> variable x clobops$tkyte%ORA12CR1> begin  2          dbms_utility.expand_sql_text  3          ( input_sql_text => 'select * from all_users',  4            output_sql_text => :x );  5  end;  6  /PL/SQL procedure successfully completed.ops$tkyte%ORA12CR1> print xX--------------------------------------------------------------------------------SELECT "A1"."USERNAME" "USERNAME","A1"."USER_ID" "USER_ID","A1"."CREATED" "CREATED","A1"."COMMON" "COMMON" FROM  (SELECT "A4"."NAME" "USERNAME","A4"."USER#" "USER_ID","A4"."CTIME" "CREATED",DECODE(BITAND("A4"."SPARE1",128),128,'YES','NO') "COMMON" FROM "SYS"."USER$" "A4","SYS"."TS$" "A3","SYS"."TS$" "A2" WHERE "A4"."DATATS#"="A3"."TS#" AND "A4"."TEMPTS#"="A2"."TS#" AND "A4"."TYPE#"=1) "A1"Now it is easy to see what query is really being executed at runtime - regardless of how many views of views you might have.  You can see the expanded text - and that will probably lead you to the conclusion that maybe that 27 table join to 25 tables you don't even care about might better be written as a two table join.Further, if you've ever tried to figure out what a VPD policy might be doing to your SQL, you know it was hard to do at best.  Christian Antognini wrote up a way to sort of see it - but you never get to see the entire SQL statement: http://www.antognini.ch/2010/02/tracing-vpd-predicates/.  But now with this function - it becomes rather trivial to see the expanded SQL - after the VPD has been applied.  
We can see this by setting up a small table with a VPD policy ops$tkyte%ORA12CR1> create table my_table  2  (  data        varchar2(30),  3     OWNER       varchar2(30) default USER  4  )  5  /Table created.ops$tkyte%ORA12CR1> create or replace  2  function my_security_function( p_schema in varchar2,  3                                 p_object in varchar2 )  4  return varchar2  5  as  6  begin  7     return 'owner = USER';  8  end;  9  /Function created.ops$tkyte%ORA12CR1> begin  2     dbms_rls.add_policy  3     ( object_schema   => user,  4       object_name     => 'MY_TABLE',  5       policy_name     => 'MY_POLICY',  6       function_schema => user,  7       policy_function => 'My_Security_Function',  8       statement_types => 'select, insert, update, delete' ,  9       update_check    => TRUE ); 10  end; 11  /PL/SQL procedure successfully completed.And then expanding a query against it:ops$tkyte%ORA12CR1> begin  2          dbms_utility.expand_sql_text  3          ( input_sql_text => 'select * from my_table',  4            output_sql_text => :x );  5  end;  6  /PL/SQL procedure successfully completed.ops$tkyte%ORA12CR1> print xX--------------------------------------------------------------------------------SELECT "A1"."DATA" "DATA","A1"."OWNER" "OWNER" FROM  (SELECT "A2"."DATA" "DATA","A2"."OWNER" "OWNER" FROM "OPS$TKYTE"."MY_TABLE" "A2" WHERE "A2"."OWNER"=USER@!) "A1"Not an earth shattering new feature - but extremely useful in certain cases.  I know I'll be using it when someone asks me to look at a query that looks simple but has a twenty page plan associated with it!

    Read the article

  • How to apply Data Mining (Association Rule) to a huge database?

    - by stckvrflw
    Hello. What I want to do is apply the association method of data mining to my SQL Server 2000 database. An association rule is something like "finding the most frequent items that appear together in the database." For those who don't know, or who want a reminder of what the association method is like, take a look at this presentation about association rules in data mining: www.authorstream.com/Presentation/a.besimi-233030-data-mining-intro-seeu-education-ppt-powerpoint - the 17th slide gives a nice example of applying an association rule to a database. So can you help me with how I should write my SQL code (if SQL alone will be sufficient, of course)? Thanks.
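    Full association-rule mining (Apriori and friends) is normally done with a dedicated engine (SQL Server's Analysis Services gained an association-rules algorithm in the 2005 release), but the simplest building block - how often two items appear together in the same transaction - can be written as a plain self-join; a sketch assuming a table Transactions(transaction_id, item_id):

      -- Count how often each pair of items occurs in the same transaction.
      SELECT a.item_id AS item_a,
             b.item_id AS item_b,
             COUNT(*)  AS times_together
      FROM   Transactions a
             JOIN Transactions b
               ON  a.transaction_id = b.transaction_id
               AND a.item_id < b.item_id          -- avoid self-pairs and duplicate orderings
      GROUP BY a.item_id, b.item_id
      HAVING COUNT(*) >= 10                       -- minimum support threshold (placeholder)
      ORDER BY times_together DESC

    Dividing times_together by the count of transactions containing item_a gives the confidence of the rule item_a => item_b, which is the other half of the standard support/confidence measure.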

    Read the article

  • how to text-align columns in DataGrid? (style DataGridCell)

    - by Olga
    I use WPF (C #). I use DataGrid. I want the first column is aligned with the center, the other columns are right-aligned. I have style: <Style x:Key="TextInCellCenter" TargetType="{x:Type TextBlock}" > <Setter Property="HorizontalAlignment" Value="Center"/> </Style> <Style TargetType="{x:Type DataGridCell}"> <Setter Property="HorizontalAlignment" Value="Right"/> </Style> DataGrid: <DataGrid> <DataGrid.Columns> <DataGridTextColumn ElementStyle="{StaticResource TextInCellCenter}" Binding="{Binding Path=Name}" /> <DataGridTextColumn Binding="{Binding Path=Number}" /> <DataGridTextColumn Binding="{Binding Path=Number}" /> <DataGridTextColumn Binding="{Binding Path=Number}" /> I have all the columns are right-aligned. Please tell me, how do I change the first column had a center text-alignment?

    Read the article

  • sIFR displays only the first line of text in Opera when on transparent background

    - by Gary
    I have implemented sIFR for the first time, on a test page. The code I have is below. It works fine in IE7, Firefox, Safari and Chrome, but in Opera only the first line of sIFR-ed text appears when the page first loads and after refreshing the page. But, if I scroll the page, all the text appears! It seems to have to do with transparency, because if I turn transparency off, it works fine. Please can someone help me to make this work? Thanks, Gary <link rel="stylesheet" href="sIFR-print.css" type="text/css" media="print" /> <link rel="stylesheet" href="all.css" type="text/css" media="all" /> <script src="sifr.js" type="text/javascript"></script> <script src="sifr-config.js" type="text/javascript"></script> <script type="text/javascript"> //<![CDATA[ var sIFRfont = { src: 'fontname.swf' }; sIFR.activate(sIFRfont); sIFR.replace(sIFRfont, { css: [ '.sIFR-root { line-height: 1em; font-size: 64px; color: #000000; background-color: blue; text-align: left; font-weight: normal; font-style: normal; text-decoration: none; visibility: hidden; }' ], fitExactly : true, forceClear : true, forceSingleLine : false, selector : 'div.flashtext', transparent : true }); //]]> </script>

    Read the article

  • Convert text files to excel files using python

    - by Rahim Jaafar
    I am working on INFORMIX 4GL programs. Those programs produce output text files. This is an example of the output:
    Lot No|Purchaser name|Billing|Payment|Deposit|Balance|
    J1006|JAUHARI BIN HAMIDI|5285.05|4923.25|0.00|361.80|
    J1007|LEE, CHIA-JUI AKA LEE, ANDREW J. R.|5366.15|5313.70|0.00|52.45|
    J1008|NAZRIN ANEEZA BINTI NAZARUDDIN|5669.55|5365.30|0.00|304.25|
    J1009|YAZID LUTFI BIN AHMAD LUTFI|3180.05|3022.30|0.00|157.75|
    These text files can be converted to Excel files manually, but I want to ask: is there any script I can use to convert the .txt files to .xls files? Update: I can now convert a text file to an Excel file in Python using a script given by the user Rami Helmy - a big thanks to him. However, that script produces more than one Excel file, depending on the number of '|' characters in the text files, and it can only convert one text file at a time. I want to convert all the text files without naming each one, so I am looking for a way to make the script: output only one Excel file; convert all .txt files from a directory given by the user; and name the output automatically after the text files it came from. I am new to Python; hopefully someone can help me solve these problems. Thank you. Update 2: I have done all of that, but there is something that confuses me: the output Excel files contain a "square" symbol. How can I ensure there is no square symbol like that after I convert from text files to Excel? Thank you.
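    A sketch of one way to do the conversion with the third-party xlwt package (pip install xlwt): it writes every .txt file in a directory into a single .xls workbook, one sheet per input file (named after the file), and strips carriage-return characters, which are the likely source of the "square" symbols:

      import glob
      import os
      import xlwt  # third-party package for writing .xls files

      def convert_directory(directory):
          workbook = xlwt.Workbook()
          for path in glob.glob(os.path.join(directory, "*.txt")):
              # Excel limits sheet names to 31 characters.
              sheet_name = os.path.splitext(os.path.basename(path))[0][:31]
              sheet = workbook.add_sheet(sheet_name)
              with open(path) as f:
                  for row_index, line in enumerate(f):
                      # Strip \r and \n so stray control characters don't show up as squares in Excel.
                      fields = line.replace("\r", "").rstrip("\n").split("|")
                      for col_index, value in enumerate(fields):
                          sheet.write(row_index, col_index, value)
          workbook.save(os.path.join(directory, "output.xls"))

      convert_directory(".")  # convert all .txt files in the current directory

    The split("|") matches the pipe-delimited sample above; a different delimiter only needs that one line changed.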

    Read the article

  • Which type of file is easiest and most efficient to parse? (html, pdf, csv, text)

    - by Harikrishna
    I want to parse HTML, PDF, CSV and plain text files. Which of these file types is easiest and most efficient to parse? I am asking because I want to parse PDF, HTML, CSV and text files through common parsing code if possible. For example, if HTML is the easiest and most efficient to parse, I will write the parsing code for HTML files and try to convert PDF, CSV and text files to HTML (if possible), so the code written for parsing HTML also works for the other formats. Likewise, if PDF is the easiest and most efficient to parse, I will convert HTML, CSV and text files to PDF and write the parsing code for PDF files only. So my questions are: (1) Which type of file is easiest and most efficient to parse (PDF, CSV, HTML, text)? (2) Is converting these files (PDF, text, HTML, CSV) to each other possible - for example, if HTML is easiest to parse, converting PDF to HTML, text to HTML and CSV to HTML?

    Read the article

  • Need a regular expression to parse a text body

    - by Ali
    Hi guys, I need a regular expression to parse a body of text. Basically assume this that we have text files and each of which contains random text but within the text there would be lines in the following formats - basically they are a format for denoting flight legs. eg: 13FEB2009 BDR7402 1000 UUBB 1020 UUWW FLT This line of text is always on one line The first word is a date in the format DDMMMYYYY Second word could be of any length and hold alphanumeric characters third word is the time in format HHMM - its always numeric fourth word is a location code - its almost always just alphabets but could also be alphanumeric fifth word is the arrival time in format HHMM - its always numeric sixth word is a location code - its almost always just alphabets but could also be alphanumeric Any words which follow on the same line are just definitions A text file may contain among lots of random text information one or more such lines of text. I need a way to be able to extract all this information i.e just these lines within a text file and store them with their integral parts separated as mentioned in an associative array so I have something like this: array('0'=>array('date'=>'', 'time-dept'=>'', 'flightcode'=>'',....)) I'm assuming regular expressions would be in order here. I'm using php for this - would appreciate the help guys :)
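    A sketch of a pattern for the line format described (date, flight code, departure time, departure location, arrival time, arrival location, optional trailing notes), using preg_match_all with named groups; $body is assumed to hold the contents of one text file:

      <?php
      $pattern = '/^(?P<date>\d{2}[A-Z]{3}\d{4})\s+'
               . '(?P<flightcode>[A-Z0-9]+)\s+'
               . '(?P<time_dept>\d{4})\s+'
               . '(?P<loc_dept>[A-Z0-9]+)\s+'
               . '(?P<time_arr>\d{4})\s+'
               . '(?P<loc_arr>[A-Z0-9]+)'
               . '(?P<notes>.*)$/m';           // /m: ^ and $ match at each line of the file

      preg_match_all($pattern, $body, $matches, PREG_SET_ORDER);

      $legs = array();
      foreach ($matches as $m) {
          $legs[] = array(
              'date'       => $m['date'],
              'flightcode' => $m['flightcode'],
              'time-dept'  => $m['time_dept'],
              'loc-dept'   => $m['loc_dept'],
              'time-arr'   => $m['time_arr'],
              'loc-arr'    => $m['loc_arr'],
              'notes'      => trim($m['notes']),
          );
      }
      ?>

    The key ideas are the /m modifier, so each matching line anywhere in the random text is picked up independently, and PREG_SET_ORDER, so $matches comes back as one array per flight leg rather than one array per field.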

    Read the article

  • JavaScript: count minimal length of characters in text, ignoring special codes inside

    - by ilnur777
    I want to ignore counting the length of characters in the text if there are special codes inside in textarea. I mean not to count the special codes characters in the text. I use special codes to define inputing smileys in the text. I want to count only the length of the text ignoring special code. Here is my approximate code I tried to write, but can't let it work: // smileys // ======= function smileys(){ var smile = new Array(); smile[0] = "[:rolleyes:]"; smile[1] = "[:D]"; smile[2] = "[:blink:]"; smile[3] = "[:unsure:]"; smile[4] = "[8)]"; smile[5] = "[:-x]"; return(smile); } // symbols length limitation // ========================= function minSymbols(field){ var get_smile = smileys(); var text = field.value; for(var i=0; i<get_smile.length; i++){ for(var j=0; j<(text.length); j++){ if(get_smile[i]==text[j]){ text = field.value.replace(get_smile[i],""); } } } if(text.length < 50){ document.getElementById("saveB").disabled=true; } else { document.getElementById("saveB").disabled=false; } } How the script should be in order to let it work? Thank you!
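    A sketch of one way to make this work: strip every occurrence of each smiley code from a copy of the text first, then count what is left. The original loop fails because text[j] is a single character, and String.replace with a string argument only removes the first occurrence:

      function lengthWithoutSmileys(value) {
        var codes = smileys();                 // reuse the existing smiley list
        var text = value;
        for (var i = 0; i < codes.length; i++) {
          // split/join removes every occurrence, not just the first one like replace() does
          text = text.split(codes[i]).join('');
        }
        return text.length;
      }

      function minSymbols(field) {
        document.getElementById('saveB').disabled = lengthWithoutSmileys(field.value) < 50;
      }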

    Read the article

  • Getting the highlighted text inside selected element only.

    - by ashays
    My goal is to be able to get the highlighted text within a document, but only if that text is within a given section, and then apply a certain style to that selected text after clicking a div tag. I'll explain what I mean: So, having looked at window.getSelection() and document.selection.createRange().text, I attempted to use elmnt.getSelection() or elmnt.selection.createRange().text for some HTML element, elmnt. However, it doesn't seem to work, so that idea seems pretty null. This means I can't use this idea to determine the text that is highlighted within a given location. In case this doesn't make sense, essentially, I want html code that looks like this: <body> <div id="content">Stuff here will not be effected</div> <div id="highlightable">Stuff here can be effected when highlighted</div> <div id="morecontent">Stuff here will also not be effected</div> </body> So that whenever I've highlighted text, clicking on a specified div will apply the proper CSS. Now, on to the div tags. Basically, here's what I've got on that: $('.colorpicker').click( function(e) { console.log(getSelectedText()); } Eventually, all I want this to highlight the selected text and have the div tag change the color of the selected text to that of the respective div tag that I've selected. Neither of these seems to be working right now, and my only guess for the reason of the div tag is that it unhighlights whatever I've got selected whenever I click on the div tag. Fallbacks: If there is more than one time that 'abc' is found on the page and I highlight to color 'abc', I would like that only that copy of 'abc' be highlighted. I know this is a lot in one question, but even if I could get a little head start on this idea, my next personal project would be going a lot more smoothly. Thanks. :)
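    A sketch of the selection part for modern browsers: window.getSelection() is document-wide, but the range it returns can be tested against the #highlightable div before any styling is applied. surroundContents only succeeds when the selection doesn't cross element boundaries, and binding to mousedown with preventDefault stops the click on the swatch from clearing the selection:

      function colorSelectionInside(containerId, color) {
        var selection = window.getSelection();
        if (!selection.rangeCount || selection.isCollapsed) return;

        var range = selection.getRangeAt(0);
        var container = document.getElementById(containerId);

        // Only act if the highlighted text lives inside the allowed element.
        if (!container.contains(range.commonAncestorContainer)) return;

        var span = document.createElement('span');
        span.style.color = color;
        try {
          range.surroundContents(span);   // throws if the range spans partial elements
        } catch (e) {
          // ignore selections that cross element boundaries in this sketch
        }
      }

      // Wire it to the color swatches; mousedown avoids clearing the selection before we read it.
      $('.colorpicker').mousedown(function (e) {
        e.preventDefault();
        colorSelectionInside('highlightable', $(this).css('background-color'));
      });

    Because the span is wrapped around the exact selected range, only the highlighted copy of "abc" is colored, not every occurrence of the string on the page.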

    Read the article

  • Databinding Textbox to Form.Text (title)

    - by MysticEarth
    I'm trying to bind a Textbox.Text to Form.Text (which sets the title of the form). The binding itself works. But, the title isn't updated until I move the entire form. How can I achieve Form.Text being updated without moving the form? I'd like Form.Text being updated directly when I type something in the Textbox. Edit; I set the title of the Form in a TextChanged event which is fired by a ToolStripTextbox: public partial class ProjectForm : Form { public ProjectForm() { // my code contains all sorts of code here, // but nothing that has something to do with the text. } } private void projectName_TextChanged_1(object sender, EventArgs e) { this.Text = projectName.TextBox.Text; } And the Databinding version: public partial class ProjectForm : Form { public ProjectForm() { this.projectName.TextBox.DataBindings.Add("Text", this, "Text", true, DataSourceUpdateMode.OnValidation); } } Edit 2: I see I forgot to mention something. Don't know if it adds anything, but my apllication is a MDI application. The part of the title that changes is: ApplicationName [THIS CHANGES, BUT ONLY AFTER MOVING/RESIZING]

    Read the article

  • text-transform cannot change/be overwritten? (css)

    - by Radek
    I am studying css and I do not understand why some OTHER text and line 2 has red underline effect on them. I thought I either changed it for class others or reset it to none for class line2 I tried `text-decoration:none' for both .line2 and .others and the text was still red underlined. <html> <head> <style type="text/css"> .line {color:red; text-transform:uppercase; text-decoration:underline;} .line2 {color:blue; text-transform:none;} .others {color:black; text-transform:lowercase; text-decoration:blink; font-weight:bold;} </style> </head> <body> <div class='line'> line 1 <div class='others'><BR>some OTHER text<BR></DIV> <div class='line2'> <BR>line 2 </div> </div> </body> </html>

    Read the article
