Search Results

Search found 28760 results on 1151 pages for 'search folder'.

Page 578 of 1151

  • deep linking in Excel sheets exported to html

    - by pomarc
    Hello everybody, I am working on a project where I must export a lot of Excel files to HTML. This is pretty straightforward using automation and saving as HTML. The problem is that many of these sheets have links to worksheets of other files, and I must find a way to write a link to a single inner worksheet.

    When you export a multi-sheet Excel file to HTML, Excel creates a main .htm file, a folder named filename_file, and inside this folder it writes several files: a CSS file, an XML list of files, a file that creates the tab bar, and several HTML files named sheetxxx.htm, each one representing a worksheet. When you open the main file, you can click the tab bar at the bottom to select the appropriate sheet. Each tab is in fact a link that replaces a frame's content with the sheetxxx.htm file; when that file is loaded, a JavaScript function that selects the right tab gets called.

    The exported files will be published on a web site. I will have to post-process each file and replace every link to the other .xls files with a link to the matching .htm file, finding a way to open the right worksheet. I think I could add a parameter to the processed .htm file's link URL, such as myfile.htm?sh=sheet002.htm if I want to link to the second worksheet of myfile.htm (formerly myfile.xls). After exporting, I could inject a simple script into each of the main files which, on load, retrieves the sh parameter with jQuery (this is easy) and uses it to somehow replace the frSheet frame contents (where the sheets get loaded), opening the right inner sheet instead of the default one (this is what I call deep linking), mimicking what happens when a user clicks a tab. This last step is the missing piece... :)

    I am considering different options, such as replacing the source of the $("frSheet") frame after document.ready. I'd like to hear your advice on what you think the best way to do this would be. Any help is greatly appreciated, many thanks.
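
    A minimal sketch of that missing step, assuming the exported frameset exposes the sheet frame under the name frSheet and the page is requested as myfile.htm?sh=sheet002.htm (both names come from the question; everything else here is illustrative):

        // read a named parameter from the query string (plain JavaScript helper)
        function getQueryParam(name) {
            var match = new RegExp('[?&]' + name + '=([^&]*)').exec(window.location.search);
            return match ? decodeURIComponent(match[1]) : null;
        }

        $(document).ready(function () {
            var sheet = getQueryParam('sh');            // e.g. "sheet002.htm"
            if (sheet) {
                // point the sheet frame at the requested worksheet instead of the default one
                $('frame[name="frSheet"]').attr('src', sheet);
            }
        });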

    Read the article

  • JMeter CSV Data Set is corrupting Japanese strings stored as proper UTF-8, I get Question Marks instead

    - by Mark Bennett
    I read in search terms from a simple text file to send to a search engine. It works fine in English, but gives me ???? for any Japanese text. Text with mixed English and Japanese does show the English text, so I know it's reading the file.

    What I'm seeing: the input text "Snow Leopard ???????????????" turns into "Snow Leopard ???????????????" in the POST field of an HTTP request. If I set JMeter to encode the data, it just puts in the percent sequence for question marks. Interesting note: in the example above there are 15 Japanese characters and then 15 question marks, so at some point it's being seen as full characters and not just bytes.

    About the data: the CSV file is very simple in structure. There's only one field / one column, which I name TERM and later use as ${TERM}. I don't really need full CSV because it's only one string per line; there are no commas or quotes. When I run the Unix "file" command on the file, it says UTF-8 text. I've also verified it in command line and graphical mode on two machines.

    JMeter CSV Data Set config:
    Filename: japanese-searches.csv
    File encoding: UTF-8 (also tried without)
    Variable names: TERM
    Delimiter: ,
    Allow quoted data: False (I also tried True; different, but still wrong)
    Recycle at EOF: True
    Stop at EOF: False
    Sharing mode: All threads

    A few things I've tried: allowing quoted data (it changed to other strange characters); -Dfile.encoding=UTF-8; encoding the POST, but it just turned into a bunch of %nn for question marks. I'm also not sure how to debug just after each line of the CSV is read in. I think it's corrupted right away, but I'm not sure. If it's only mangled when I reference it, then instead of ${TERM} perhaps there's some other "to bytes" function call; I'll start checking into that. I haven't done anything with the JMeter functions yet.

    Read the article

  • IE7: container of hidden Div displays incorrectly

    - by dmr
    There is a search box div that contains a boxhead div and a boxbody div. Inside the boxbody div, there is a searchToggle div. When the user clicks a certain link on the page, the display style property of the searchToggle div is toggled between block and none. The two background images for the body of the search box are set via the CSS of the searchBox div and the boxbody div.

    In IE7, when the searchToggle div is hidden, the background image from the searchBox div extends further to the left than it should. It shows up correctly when the display of the searchToggle div is block. Everything shows up correctly in both cases in IE8 and FF. Any ideas why this is happening?

    The relevant HTML:

        <div class="searchBox">
          <div class="boxhead">
            <h2></h2>
          </div>
          <div class="boxbody">
            <div id="searchToggle" name="searchToggle">
            </div>
          </div>
        </div>

    The relevant CSS:

        .searchBox {
          margin: 0 auto;
          width: 700px;
          background: url(/images/myImageRight-r.gif) no-repeat bottom right;
          font-size: 100%;
          text-align: left;
          overflow: hidden;
        }
        .boxbody {
          margin: 0;
          padding: 5px 30px 31px;
          background-image: url(/images/myImageLeft.gif);
          background-repeat: no-repeat;
          background-position: left bottom;
        }

    Read the article

  • What could possibly be different between the table in a DataContext and an IQueryable<Table> when do

    - by Nate Bross
    I have a table where I need to do a case-insensitive search on a text field. If I run this query in LINQPad directly against my database, it works as expected:

        Table.Where(tbl => tbl.Title.Contains("StringWithAnyCase"))

    In my application, I've got a repository which exposes IQueryable objects and does some initial filtering. It looks like this:

        var dc = new MyDataContext();

        public IQueryable<Table> GetAllTables()
        {
            var ret = dc.Tables.Where(t => t.IsActive == true);
            return ret;
        }

    In the controller (it's an MVC app) I use code like this in an attempt to mimic the LINQPad query:

        var rpo = new RepositoryOfTable();
        var tables = rpo.GetAllTables();

        // for some reason, this does a CASE SENSITIVE search which is NOT what I want.
        tables = tables.Where(tbl => tbl.Title.Contains("StringWithAnyCase"));

        return View(tables);

    The column is defined as nvarchar(50) in SQL Server 2008. Any help or guidance is greatly appreciated!
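
    One hedged workaround, regardless of where the case-sensitivity actually comes from (a sketch): force both sides to the same case, which LINQ to SQL translates into LOWER() in the generated SQL, so the comparison no longer depends on the column collation.

        // case-insensitive substring match independent of the column's collation
        var term = "StringWithAnyCase".ToLower();
        tables = tables.Where(tbl => tbl.Title.ToLower().Contains(term));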

    Read the article

  • Outlook Interop: Password protected PST file headache

    - by Ed Manet
    Okay, I have no problem identifying the .PST file using the Outlook Interop assemblies in a C# app. But as soon as I hit a password protected file, I am prompted for a password. We are in the process of disabling the use of PSTs in our organization and one of the steps is to unload the PST files from the users' Outlook profile. I need to have this app run silently and not prompt the user. Any ideas? Is there a way to create the Outlook.Application object with no UI and then just try to catch an Exception on password protected files?

        // create the app and namespace
        Application olApp = new Application();
        NameSpace olMAPI = olApp.GetNamespace("MAPI");

        // get the storeID of the default inbox
        string rootStoreID = olMAPI.GetDefaultFolder(OlDefaultFolders.olFolderInbox).StoreID;

        // loop thru each of the folders
        foreach (MAPIFolder fo in olMAPI.Folders)
        {
            // compare the first 75 chars of the storeid
            // to prevent removing the Inbox folder.
            string s1 = rootStoreID.Substring(1, 75);
            string s2 = fo.StoreID.Substring(1, 75);
            if (s1 != s2)
            {
                // unload the folder
                olMAPI.RemoveStore(fo);
            }
        }
        olApp.Quit();

    Read the article

  • jQuery pagination without all the rows of data before hand

    - by Aaron Mc Adam
    Hi guys, I'm trying to find a solution for paginating my search results. I have built the links and everything works without jQuery. I've also got AJAX working for the links, but I just need a way of tidying up the list of links, as some search results return up to 60 pages and the links spread to two rows in my template.

    I am searching the Amazon API and can only return 10 results at a time. Upon clicking the pagination links, the next 10 results are returned. I have access to the total results number from the XML, but not to all the data at once, so I can't put all the data into a "hiddenResults" div as the jQuery Pagination plugin needs.

    Here is the jQuery I have for the pagination:

        var $pagination = $("#pagination ul");

        $pagination.delegate('.pagenumbers', 'click', function() {
            var $$this = $(this);
            $$this.closest('li').addClass('cached');
            $pagination.find("#currentPage")
                .removeAttr('id')
                .wrapInner($('<a>', {'class': 'pagenumbers', 'href': 'book_data.php?keywords=' + keywords}));
            $$this.closest('li').attr('id', 'currentPage').html($$this.text());

            $.ajax({
                type    : "GET",
                cache   : false,
                url     : "book_data.php",
                data    : { keywords : keywords, page : $$this.text() },
                success : show_results
            });

            return false;
        });

        function show_results(res) {
            $("#searchResults")
                .replaceWith($(res).find('#searchResults'))
                .find('table.sortable tbody tr:odd').addClass('odd');
            detailPage();
            selectForm();

            if (!($pagination.find('li').hasClass('cached'))) {
                $.get("book_data.php", {
                    keywords : keywords,
                    page     : ($pagination.find("#currentPage").text() + 1)
                });
            }
        }

    Read the article

  • Open txt files from a directory, compare the values, and output the top 15 in PHP

    - by Anon
    Hello, I recently designed a referral game website for the fun of it. There's a simple MySQL user system with email verification; it's using the UserCake user management system. On top of this I added a PHP page that the user can give to "victims": when they visit it they get "infected" and can infect other users or "victims". This page uses GET to get the username from the URL.

    I have a folder where, when a user registers, a file is created with 4 digits and then the username (e.g. 0000Username.txt). All the numbers are the same; they're just there so that if a user discovers the folder they won't be able to find the files. There is also a text file in the same format with IPS in the name (e.g. 0000IPSUsername.txt).

    The page, when visited, gets the username from the URL and checks whether the text file for that username exists. If the username is present in the URL and valid, it opens the IPS file and adds the visitor's IP, then opens the user text file, takes the value, adds one to it, and saves. At the end it decides between saying "You are infected, Username has infected (amount) people." or just "You have been infected."

    Now to what I need! I need to add a high-scores page to the website so people can compete to be the one with the most "infections". I thought I could use readdir to get a list of the files and open them with the value in an array, but I also need it to strip the username from the file name. It would be best if it just saves to a text file like "Username | value", because then I can add echoes of the HTML tags and have it include the file in the page I want it on. Many thanks in advance.
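
    A rough sketch of that idea, assuming the counter files live in a folder called counters/, follow the 0000Username.txt naming from the question, and sit next to the 0000IPS... files; the folder name and the output file name are made up for the example:

        <?php
        $scores = array();
        foreach (glob('counters/0000*.txt') as $file) {
            $name = basename($file, '.txt');
            // skip the 0000IPSUsername.txt files, keep only the counter files
            if (strpos($name, '0000IPS') === 0) {
                continue;
            }
            $username = substr($name, 4);                 // strip the 4-digit prefix
            $scores[$username] = (int) trim(file_get_contents($file));
        }

        arsort($scores);                                  // highest count first
        $top15 = array_slice($scores, 0, 15, true);

        // write "Username | value" lines so a page can simply include this file
        $lines = '';
        foreach ($top15 as $username => $count) {
            $lines .= htmlspecialchars($username) . ' | ' . $count . "<br />\n";
        }
        file_put_contents('highscores.txt', $lines);
        ?>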

    Read the article

  • VB.NET 2008, Windows 7 and saving files

    - by James Brauman
    Hello, we have to learn VB.NET for the semester; my experience lies mainly with C#, not that this should make a difference to this particular problem. I've used just about the most simple way to save a file using the .NET framework, but Windows 7 won't let me save the file anywhere (or anywhere that I have found yet). Here is the code I am using to save a text file:

        Dim dialog As FolderBrowserDialog = New FolderBrowserDialog()
        Dim saveLocation As String = dialog.SelectedPath

        ' ... Build up output string ...

        Try
            ' Try to write the file.
            My.Computer.FileSystem.WriteAllText(saveLocation, output, False)
        Catch PermissionEx As UnauthorizedAccessException
            ' We do not have permissions to save in this folder.
            MessageBox.Show("Do not have permissions to save file to the folder specified. Please try saving somewhere different.", "Error", MessageBoxButtons.OK, MessageBoxIcon.Error)
        Catch Ex As Exception
            ' Catch any exceptions that occurred when trying to write the file.
            MessageBox.Show("Writing the file was not successful.", "Error", MessageBoxButtons.OK, MessageBoxIcon.Error)
        End Try

    The problem is that using this code throws an UnauthorizedAccessException no matter where I try to save the file. I've tried running the .exe file as administrator, and the IDE as administrator. Is this just Windows 7 being overprotective? And if so, what can I do to solve this problem? The requirements state that I must be able to save a file! Thanks.
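
    For what it's worth, a hedged guess at the cause rather than a confirmed diagnosis: the dialog is never shown, so SelectedPath is empty, and even when it is set it is a folder path rather than a file path, which WriteAllText reports as an access error. A sketch of the usual pattern (the output file name is made up):

        Dim dialog As New FolderBrowserDialog()
        If dialog.ShowDialog() = DialogResult.OK Then
            ' WriteAllText needs a full file path, not just the chosen folder
            Dim saveLocation As String = System.IO.Path.Combine(dialog.SelectedPath, "output.txt")
            My.Computer.FileSystem.WriteAllText(saveLocation, output, False)
        End If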

    Read the article

  • Updating target workbook - extracting data from source workbook

    - by Allan
    My question is as follows: I have given a workbook to multiple people. They have this workbook in a folder of their choice. The workbook name is the same for all people, but folder locations vary. Let's assume the common file name is MyData-1.xls.

    Now I have updated the workbook and want to give it to these people. However, when they receive the new one (let's call it MyData-2.xls), I want specific parts of their data pulled from their file (MyData-1) and automatically put into the new one provided (MyData-2). The columns and cells to be copied/imported are identical for both workbooks. Let's assume I want to import cell data (values only) from MyData-1.xls, Sheet 1, cells B8 through C25, to the same location in the MyData-2.xls workbook.

    How can I specify in code (possibly attached to a macro-driven "import data now" button) that I want this data brought into this new workbook? I have tried it at my own location by opening the two workbooks and using the copy/paste-special-with-links process. It works really well, but it seems to create a hard link between the two physical workbooks. I changed the name of the source workbook and it still worked. This makes me believe that there is a "hard link" between the two and that this will not allow me to give the target (MyData-2.xls) workbook to others and have it find their source workbook.
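
    A sketch of one link-free alternative for the "import data now" button in MyData-2.xls: ask each person to point to their own copy of MyData-1.xls and copy the values across. The range comes from the question; the sheet being literally named "Sheet1" and everything else here are assumptions:

        Sub ImportDataNow()
            Dim srcPath As Variant
            Dim srcBook As Workbook

            ' let the user locate their own copy of MyData-1.xls, wherever it lives
            srcPath = Application.GetOpenFilename("Excel files (*.xls), *.xls", , "Locate MyData-1.xls")
            If VarType(srcPath) = vbBoolean Then Exit Sub   ' user pressed Cancel

            Set srcBook = Workbooks.Open(Filename:=srcPath, ReadOnly:=True)

            ' copy values only, cell for cell, into the same range of this workbook
            ThisWorkbook.Worksheets("Sheet1").Range("B8:C25").Value = _
                srcBook.Worksheets("Sheet1").Range("B8:C25").Value

            srcBook.Close SaveChanges:=False
        End Sub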

    Read the article

  • Declare models elsewhere than in "models.py"

    - by sebpiq
    Hi! I have an application that splits models into different files. The folder currently looks like this:

        myapp/
            __init__.py
            models.py
            hooks/
                ...

    myapp doesn't care about what's in the hooks folder, except that there are models in there and that they have to be declared somehow. So I put this in myapp/__init__.py:

        from django.conf import settings

        for hook in settings.HOOKS:
            try:
                __import__(hook)
            except ImportError as e:
                print "Got import err !", e

        # where HOOKS = ("myapp.hooks.a_super_hook1", ...)

    The problem is that it doesn't work when I run syncdb (and it throws some strange "Got import err !"... strange considering that it's related to another module of my program that I don't even import anywhere :/ ). So I tried successively:

    1) This doesn't work either; syncdb doesn't install the models in hooks:

        for hook in settings.HOOKS:
            try:
                exec("from %s import *" % hook)

    2) This works:

        from myapp.hooks.a_super_hook1 import *

    3) This works too:

        exec("from myapp.hooks.a_super_hook1 import *")

    So I checked that in test 1) the statement executed is the same as in tests 2) and 3), and it is exactly the same... Any idea???

    Read the article

  • How do you organise multiple git repositories?

    - by dbr
    With SVN, I had a single big repository I kept on a server and checked out on a few machines. This was a pretty good backup system and allowed me to easily work on any of the machines. I could check out a specific project, commit, and it updated the 'master' project, or I could check out the entire thing.

    Now I have a bunch of git repositories for various projects, several of which are on github. I also have the SVN repository I mentioned, imported via the git-svn command.

    Basically, I like having all my code (not just projects, but random snippets and scripts, some things like my CV, articles I've written, websites I've made and so on) in one big repository I can easily clone onto remote machines or memory sticks/hard drives as backup. The problem is, since it's a private repository, git doesn't allow checking out a specific folder (one that I could push to github as a separate project, but have the changes appear in both the master repo and the sub-repos). I could use the git submodule system, but it doesn't act how I want it to (submodules are pointers to other repositories and don't really contain the actual code, so they're useless for backup).

    Currently I have a folder of git repos (for example, ~/code_projects/proj1/.git/, ~/code_projects/proj2/.git/), and after making changes to proj1 I do git push github, then I copy the files into ~/Documents/code/python/projects/proj1/ and do a single commit (instead of the numerous ones in the individual repos). Then I do git push backupdrive1, git push mymemorystick, etc.

    So, the question: how do you organise your personal code and projects with git repositories, and keep them synced and backed up?

    Read the article

  • Projects integration question

    - by qkrsppopcmpt
    The other team has a legacy system, which is a data aggregator. It is implemented as a web service using Java, SOAP, MTOM, Tomcat and Axis2. They have WSDL files defining functionality such as search, retrieve data, upload and download.

    Our team has a newly developed website built with RoR and MySQL. It is a sort of social network: users can register, add friends, upload images and videos, and also search data. We are required to connect the two systems. Possible solutions I can think of are: adding components to our website that invoke services on the aggregator, and synchronizing the website database to the aggregator.

    My doubts are: 1. How do we add components to our website? Should the components use Java or Ruby, or an adapter from Java to Ruby? It should be possible for Ruby to invoke a web service; I think it should work, since that is the point of web services. If so, can Ruby call the services in the WSDL directly? And how do we deal with the different data structures? 2. How do we synchronize our database to the aggregator? I think the best way is also through web service invocation, such as upload. That means we have to export the db records into XML files and then write some tools to upload them. The web service project supports MTOM, so it is fine to upload huge data. Am I on the right track? Can anybody give me some hints? Thanks.

    Read the article

  • How do I solve the .NET CF exception "Can't find PInvoke DLL"?

    - by Ignas Limanauskas
    This is to all the C# gurus. I have been banging my head on this for some time already, and have tried all kinds of advice on the net to no avail. The action is happening in Windows Mobile 5.0.

    I have a DLL named MyDll.dll. In MyDll.h I have:

        extern "C" __declspec(dllexport) int MyDllFunction(int one, int two);

    The definition of MyDllFunction in MyDll.cpp is:

        int MyDllFunction(int one, int two)
        {
            return one + two;
        }

    The C# class contains the following declaration:

        [DllImport("MyDll.dll")]
        extern public static int MyDllFunction(int one, int two);

    In the same class I am calling MyDllFunction the following way:

        int res = MyDllFunction(10, 10);

    And this is where the bloody thing keeps giving me "Can't find PInvoke DLL 'MyDll.dll'". I have verified that I can actually do the PInvoke on system calls, such as GetAsyncKeyState(1), declared as:

        [DllImport("coredll.dll")]
        protected static extern short GetAsyncKeyState(int vKey);

    MyDll.dll is in the same folder as the executable, and I have also tried putting it into the /Windows folder with no changes nor success. Any advice or solutions are greatly appreciated.

    Read the article

  • Copy recordset data into multiple sheets to avoid problem of maximum rows limit in Excel VBA

    - by Sam
    I am developing a reporting application in Excel/VBA 2003. The VBA code sends a search query to a database and gets the data back through a recordset, which is then copied to one of the Excel sheets. The retrieved data looks like this:

        ProductID | DateProcessed | State
        ----------|---------------|---------------------
        1         | 1/1/2010      | Picked Up
        1         | 1/1/2010      | Forward To Approver
        1         | 1/2/2010      | Approver Picked Up
        1         | 1/3/2010      | Approval Completed
        2         | 1/1/2010      | Picked Up
        3         | 1/2/2010      | Picked Up
        3         | 1/2/2010      | Forward To Approver

    The problem is that the data retrieved by the search query is so huge that it goes above the Excel row limit (65536 rows in Excel 2003). So I want to split this data into two Excel sheets. While splitting the data I want to ensure that the data for the same product remains on one sheet. For example, if the last record in the above result set is the 65537th record, then I also want to move all records for product 3 onto the new sheet. So sheet 1 will contain records for product IDs 1 and 2 with total records = 65534, and sheet 2 will contain records for product ID 3 with total records = 2.

    How can I achieve this in VBA? If it is not possible, is there any alternative solution? Thanks in advance!
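
    A sketch of one simple way to do the split while walking the recordset: once the current sheet passes a safety threshold, jump to the next sheet, but only when a new ProductID starts. It assumes an ADO recordset named rs that is ordered by ProductID, that no single product has more rows than the headroom left below the threshold, and that the workbook has sheets named Sheet1 and Sheet2 (all of these are assumptions made for the example):

        Sub CopyRecordsetSplitByProduct(rs As Object)
            Const SWITCH_AT As Long = 60000   ' leave headroom below the 65536-row limit
            Dim ws As Worksheet
            Dim r As Long
            Dim lastProduct As Variant
            Dim onSecondSheet As Boolean

            Set ws = ThisWorkbook.Worksheets("Sheet1")
            r = 1

            Do While Not rs.EOF
                ' move to the next sheet only once, and only when a new product starts
                If Not onSecondSheet And r > SWITCH_AT _
                        And rs.Fields("ProductID").Value <> lastProduct Then
                    Set ws = ThisWorkbook.Worksheets("Sheet2")
                    onSecondSheet = True
                    r = 1
                End If

                ws.Cells(r, 1).Value = rs.Fields("ProductID").Value
                ws.Cells(r, 2).Value = rs.Fields("DateProcessed").Value
                ws.Cells(r, 3).Value = rs.Fields("State").Value

                lastProduct = rs.Fields("ProductID").Value
                r = r + 1
                rs.MoveNext
            Loop
        End Sub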

    Read the article

  • How do I detect proximity of the mouse pointer to a line in Flex?

    - by Hanno Fietz
    I'm working on a charting UI in Flex. One of the features I want to implement is "snapping" of the mouse pointer to the data points in the diagram. I.e., if the user hovers the mouse pointer over a line diagram and gets close to a data point, I want the pointer to move to the exact coordinates and show a marker.

    Currently, the lines are drawn on a Shape using the Graphics API. The Shape is a child DisplayObject of a custom UIComponent subclass with the exact same dimensions. This means I already get mouseOver events on the parent of the diagram's canvas. Now I need a way to detect whether the pointer is close to one of the data points. I.e., I need an answer to the question "Which data points lie within a radius of x pixels from my current position, and which of them is closest?" upon each move of the mouse.

    I can think of the following possibilities:

    - Draw the lines not as simple lines in the Graphics API, but as more advanced objects that can have their own mouseOver events. However, I want the snapping to trigger before the mouse is actually over the line.
    - Check the original data for possible candidates upon each mouse movement. Using binary search, I might be able to reduce the number of items I have to compare sufficiently.
    - Prepare some kind of new data structure from the raw data that makes the above search more efficient. I don't know what that would look like.

    I'm guessing this is a pretty standard problem for a number of applications, but the actual code is probably usually hidden inside some framework. Is there anything I can read about this topic?

    Read the article

  • shell script filter du and find by a string inside a file in a subfolder

    - by Jason
    I have the following command that I run on Cygwin:

        find /cygdrive/d/tmp/* -maxdepth 0 -mtime -150 -type d | xargs du --max-depth=0 > foldersizesreport.csv

    I intended this command to do the following: for each folder under /d/tmp/ that was modified in the last 150 days, check its total size including the files within it and report it to the file foldersizesreport.csv. However, that is no longer good enough, because it turns out that inside each subfolder there is a properties file:

        /d/tmp/subfolder1/somefile.properties
        /d/tmp/subfolder2/somefile.properties
        /d/tmp/subfolder3/somefile.properties
        /d/tmp/subfolder4/somefile.properties

    Each somefile.properties contains a property SOMEPROPKEY=3808612800100 (among other properties); this is a time in milliseconds. I need to change the command so that instead of -mtime -150 it includes in the calculation only those subfolders whose somefile.properties has a SOMEPROPKEY value that is a time in the future. If the value (say SOMEPROPKEY=23948948) is in the past, then don't include the folder in foldersizesreport.csv at all, because it's not relevant to me.

    So the result report should look like:

        /d/tmp/,subfolder1,<its size in KB>
        /d/tmp/,subfolder2,<its size in KB>

    And if subfolder3 had a SOMEPROPKEY timestamp in the past, it would not be in that csv file. So basically I'm looking for:

        find /cygdrive/d/tmp/* -maxdepth 0 -type d | <only subfolders whose somefile.properties has SOMEPROPKEY set to a time in ms in the future, not in the past> | xargs du --max-depth=0 > foldersizesreport.csv
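
    A sketch of one way to express that filter as a shell loop instead of a single pipeline; the property name, file name and paths come from the question, and the exact CSV formatting is illustrative:

        #!/bin/bash
        now_ms=$(( $(date +%s) * 1000 ))            # current time in milliseconds

        for dir in /cygdrive/d/tmp/*/; do
            props="$dir/somefile.properties"
            [ -f "$props" ] || continue

            # pull the millisecond timestamp out of the properties file
            ts=$(sed -n 's/^SOMEPROPKEY=//p' "$props" | head -n 1)

            # keep the folder only if the timestamp is in the future
            if [ -n "$ts" ] && [ "$ts" -gt "$now_ms" ]; then
                size_kb=$(du -sk "$dir" | cut -f1)
                echo "/d/tmp/,$(basename "$dir"),$size_kb"
            fi
        done > foldersizesreport.csv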

    Read the article

  • Need help with PHP URL encoding/decoding

    - by Kenan
    On one page I'm "masking"/encoding a URL which is passed to another page; there I decode the URL and start delivering the file to the user. I found some functions for encoding/decoding URLs, but sometimes the encoded URL contains "+" or "/" and the decoded link is broken. I must use a "folder structure" for the link and cannot use the query string!

    Here is the encoding:

        $urll = 'SomeUrl.zip';
        $key = '123';
        $result = '';
        for ($i = 0; $i < strlen($urll); $i++) {
            $char = substr($urll, $i, 1);
            $keychar = substr($key, ($i % strlen($key)) - 1, 1);
            $char = chr(ord($char) + ord($keychar));
            $result .= $char;
        }
        $result = urlencode(base64_encode($result));
        echo '<a href="/user/download/' . $result . '/">PC</a>';

    Here is the decoding:

        $urll = 'segment_3'; // don't worry about this one, it's the CMS retrieving the 3rd "folder"
        $key = '123';
        $resultt = '';
        $string = '';
        $string = base64_decode(urldecode($urll));
        for ($i = 0; $i < strlen($string); $i++) {
            $char = substr($string, $i, 1);
            $keychar = substr($key, ($i % strlen($key)) - 1, 1);
            $char = chr(ord($char) - ord($keychar));
            $resultt .= $char;
        }
        echo '<br />DEC: ' . $resultt;

    So how do I encode and decode the URL? Thanks.
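
    A common way around the "+" and "/" problem (a sketch, leaving the existing character-shifting key scheme untouched) is to swap base64's URL-unsafe characters for a safe pair and drop the padding, so the result can sit inside a path segment:

        // URL-safe wrappers around base64: "+" and "/" never appear in the link segment
        function encode_segment($s) {
            return rtrim(strtr(base64_encode($s), '+/', '-_'), '=');
        }

        function decode_segment($s) {
            return base64_decode(strtr($s, '-_', '+/'));
        }

        // usage: $result = encode_segment($result);  ...  $string = decode_segment($urll);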

    Read the article

  • This is a great job opportunity!!! [closed]

    - by Stuart Gordon
    ASP.NET MVC Web Developer / London / £450pd / £25-£50,000pa / If interested, contact [email protected]!

    As a web developer within the engineering department, you will work with a team of enthusiastic developers building a new ASP.NET MVC platform for online products utilising exciting cutting-edge technologies and methodologies (elements of Agile, Scrum, Lean, Kanban and XP), as well as developing new stand-alone web products that conform to W3C standards.

    Key Responsibilities and Objectives: Develop ASP.NET MVC websites utilising frameworks and enterprise search technology. Develop and expand content management and delivery solutions. Help maintain and extend existing products. Formulate ideas and visions for new products and services. Be a proactive part of the development team and provide support and assistance to others when required.

    Qualification/Experience Required: The ideal candidate will have a web development background and be educated to degree level in a Computer Science/IT related course, plus ASP.NET MVC experience. The successful candidate needs to be able to demonstrate commercial experience in all or most of the following skills. Essential: ASP.NET MVC with C# (Visual Studio), Castle, NHibernate, XHTML and JavaScript; experience of Test Driven Development (TDD) using tools such as NUnit. Preferable: experience of Continuous Integration (TeamCity and MSBuild), SQL Server (T-SQL), experience of source control such as Subversion (plus TortoiseSVN), jQuery. Learn: Fluent NHibernate, S#arp Architecture, Spark (view engine), Behaviour Driven Design (BDD) using MSpec.

    Furthermore, you will possess good working knowledge of W3C web standards, web usability and web accessibility, and understand the basics of search engine optimisation (SEO). You will also be a quick learner, have good communication skills and be a self-motivated and organised individual.

    Read the article

  • How can I send rich emails using the user's mail client ?

    - by Brann
    I need my .NET program to send rich emails (usually containing table data, around 20 columns x 10 rows) using the user's mail infrastructure, allowing him to review/edit the mail before sending it, and storing the mail in his 'sent items' folder.

    mailto: seems the obvious choice, but unfortunately it supports neither attachments nor HTML bodies. It seems some clients support extra features (e.g. Outlook 97 used to support an &Attach tag, but this is not the case for more recent versions). I could use mailto and try to format the text body to look nice (using tabs, etc.), but this isn't really elegant and wouldn't support large data.

    Using automation seems a very big task, as I would need to automate dozens of clients (4 or 5 versions of Outlook, Lotus Notes, Thunderbird, etc.). This would be a huge effort and it's not really my core business.

    I could send emails through code and write my own mail form to let the user edit the mail, but this would have a lot of drawbacks: the user would need to manually configure the mail server settings; he wouldn't have access to his contact directory; and the mail wouldn't be stored in his sent items folder.

    This seems a quite common issue, but I haven't found any satisfying solution yet; does someone know of a library supporting this (i.e. containing automation logic for most mainstream email clients)? Or an alternative to mailto?

    Read the article

  • Resizing FileReference image then reuploading, only reuploads original image.

    - by pfunc
    I can't figure out how to do this. Someone selects an image after calling FileReference.browse(). I take that image and make a thumbnail in Flash. Then I upload that image like so:

        var newFileReq:URLRequest = new URLRequest(FILE_UPLOAD_TEMP);
        newFileReq.contentType = "application/octet-stream";

        var fileReqVars:URLVariables = new URLVariables();
        fileReqVars.image = myThumbImage;
        fileReqVars.folder = "Thumbs";

        newFileReq.data = fileReqVars;
        newFileReq.method = URLRequestMethod.POST;

        // upload the first image
        fileRef.addEventListener(Event.COMPLETE, onFirstFileUp);
        fileRef.upload(newFileReq, "Filedata");

    All this does is upload the original image. How do I change the fileRef to upload the new thumb? I have traced out the size of myThumbImage and it is correct. I have placed it visually on the stage after creating the thumb, and it seems like it works. But when I upload it to an aspx page (that basically just throws it into a folder), it uploads the original larger image.

    Read the article

  • Rails paginate existing array of ActiveRecord results

    - by SaoiseK
    Hello, I generally use will_paginate for pagination in my app, but I have hit a stumbling block on my search feature. I'm using Thinking Sphinx for full-text search, which returns results already paginated. The problem I'm having is that after I've received the results from Thinking Sphinx, I need to merge them with some other results and re-order them. Once I've finished processing them, I have an Array of results that is very different from the original from TS.

    As there could be 1000+ results in this Array, pagination is a necessity. The problem is that I can't figure out how to get will_paginate to work with an existing array. I've done some research, and it seems the only solutions to this problem are from several years ago and are based around the old built-in Paginator class. The most recent one I could find that makes use of will_paginate was from devchix from mid-2007: http://www.devchix.com/2007/07/23/will_paginate-array/comment-page-1/ . I've given this a go but it doesn't seem to do anything for me.

    Are there any current methods for applying pagination (preferably via will_paginate) to an existing array of AR results?
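
    For what it's worth, a sketch of the WillPaginate::Collection approach the gem exposes for wrapping a plain Array; merged_results, the per-page size and the instance variable name are all assumptions for the example:

        # wrap an ordinary Array in a paginated collection
        page     = (params[:page] || 1).to_i
        per_page = 20

        @results = WillPaginate::Collection.create(page, per_page, merged_results.size) do |pager|
          # hand the collection just the slice that belongs on this page
          pager.replace(merged_results[pager.offset, pager.per_page].to_a)
        end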

    Read the article

  • Short names versus long names in Windows

    - by normski
    I have some code which gets the short name from a file path using GetShortPathNameW(), and then later retrieves the long name via GetLongPathNameA().

    The original file is of the form "C:/ProgramData/My Folder/File.ext". However, following conversion to the short form and back to the long form, the filename becomes "C:/Program Files/My Folder/Filename.ext". The short name is of the form "C:/PROGRA~2/MY_FOL~1/FIL~1.EXT". The short name is being incorrectly resolved.

    The code compiles using VS 2005 on Windows 7 (I cannot upgrade the project to VS 2008). Does anybody have any idea why this might be happening?

        DWORD pathLengthNeeded = ::GetShortPathNameW(aRef->GetFilePath().c_str(), NULL, 0);
        if (pathLengthNeeded != 0)
        {
            WCHAR* shortPath = new WCHAR[pathLengthNeeded];
            DWORD newPathNameLength = ::GetShortPathNameW(aRef->GetFilePath().c_str(), shortPath, pathLengthNeeded);
            if (newPathNameLength != 0)
            {
                UI_STRING unicodePath(shortPath);
                std::string asciiPath = StringFromUserString(unicodePath);
                pathLengthNeeded = ::GetLongPathNameA(asciiPath.c_str(), NULL, 0);
                if (pathLengthNeeded != 0)
                {   // convert it back to a long path if possible. For goodness sake, can't we use Unicode throughout?
                    char* longPath = new char[pathLengthNeeded];
                    DWORD newPathNameLength = ::GetLongPathNameA(asciiPath.c_str(), longPath, pathLengthNeeded);
                    if (newPathNameLength != 0)
                    {
                        std::string longPathString(longPath, newPathNameLength);
                        asciiPath = longPathString;
                    }
                    delete [] longPath;
                }
                SetFullPathName(asciiPath);
            }
            delete [] shortPath;
        }
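
    One thing worth ruling out (a hedged observation, not a confirmed diagnosis): the wide-to-ANSI round trip through StringFromUserString can alter the path before GetLongPathNameA ever sees it. A sketch of resolving the long name while staying in Unicode the whole way, converting to a narrow string only at the very end if that is really needed:

        #include <windows.h>
        #include <string>
        #include <vector>

        // resolve a short (8.3) path back to its long form using only the wide API
        std::wstring LongPathFromShort(const std::wstring& shortPath)
        {
            DWORD needed = ::GetLongPathNameW(shortPath.c_str(), NULL, 0);
            if (needed == 0)
                return shortPath;                       // leave it unchanged on failure

            std::vector<wchar_t> buffer(needed);
            DWORD written = ::GetLongPathNameW(shortPath.c_str(), &buffer[0], needed);
            if (written == 0 || written >= needed)
                return shortPath;

            return std::wstring(&buffer[0], written);
        }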

    Read the article

  • Sort command not working as expected

    - by user964689
    If anybody can help me write a loop to iterate over files in a folder, it would save me a huge amount of time. I think it must be quite a simple solution, but I currently don't know how to nest a loop within a loop. So far I have this script:

        cd /folderlocation/

        for i in `</textfile_containing_lines_to_iterate_through`
        do
            #size=`echo $i | perl -nE '/:([\d-]+)/ && say abs(eval $1)'`
            #echo "$size"

            zcat dataset | head -n 18 > temp"$i".vcf
            tabix dataset $i >> temp"$i".vcf

            vcftools --window-pi 1000000 --vcf temp10individuals"$i".vcf >> run_summary.txt
            cat out.windowed.pi >> outputfile_2
            #rm temp*
        done

        grep -v "PI" outputfile_2 > outputfile
        rm outputfile_2

    I need to expand this so that the script runs multiple times, through all of the 'textfiles_containing_lines_to_iterate_through'. Currently I change the name of the text file manually each time and re-run the script. So I need a loop that does this for each file in the folder, and that also uses the name of the file as part of the output file name, so that I can match an output file to an input file. Any help would be really useful and greatly appreciated! Many thanks in advance.
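
    A sketch of one way to nest the loops, assuming the list files all end in .txt and live in /folderlocation/ (both assumptions; adjust the glob and output naming to taste). The temp .vcf name is also made consistent here, where the original used two different spellings:

        #!/bin/bash
        cd /folderlocation/

        for listfile in *.txt; do
            # use the list file's name (without extension) to label its outputs
            name=$(basename "$listfile" .txt)

            while read -r i; do
                zcat dataset | head -n 18 > "temp${i}.vcf"
                tabix dataset "$i" >> "temp${i}.vcf"

                vcftools --window-pi 1000000 --vcf "temp${i}.vcf" >> "run_summary_${name}.txt"
                cat out.windowed.pi >> "outputfile_2_${name}"
            done < "$listfile"

            grep -v "PI" "outputfile_2_${name}" > "outputfile_${name}"
            rm "outputfile_2_${name}"
        done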

    Read the article

  • How do I loop through elements inside a div?

    - by crosenblum
    I have to make a custom function for search/replace of text, because Firefox counts text nodes differently than IE, Google Chrome, etc. I am trying to use this code, which I saw at "Firefox WhiteSpace Issue", since in my other function I am looping numerically through nodes, which serves my functional needs perfectly in other browsers. But it refuses to work as part of a search/replace function that takes place after some AJAX content is loaded.

    Here is the code I have tried; I must be missing the correct understanding of how to loop through elements inside a div:

        // get all childnodes inside div
        function div_translate(divid) {
            // list child nodes of parent
            if (divid != null) {
                // var children = parent.childNodes, child;
                var parentNode = divid;

                // start loop thru child nodes
                for (var node = parentNode.firstChild; node != null; node = node.nextSibling) {
                    // begin check nodeType
                    if (node.nodeType == 1) {
                        // get value of this node
                        var value = content(node);

                        // get class of this node
                        var myclass = node.attr('class');
                        console.log(myclass);

                        // begin check if value undefined
                        if (typeof(value) != 'undefined' && value != null) {
                            console.log(value);

                            // it is a text node. do magic.
                            for (var x = en_count; x > 0; x--) {
                                // get current english phrase
                                var from = en_lang[x];
                                // get current other language phrase
                                var to = other_lang[x];

                                if (value.match(from)) {
                                    content(node, value.replace(from, to));
                                }
                            }
                        } // end check if value undefined
                    } // end check nodeType
                } // end loop thru child nodes
            }
        }
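
    One detail that may be tripping this up (a hedged observation): node here is a plain DOM element, and plain DOM elements have no .attr() method, so that line throws and stops the loop. Two usual ways to read the class, as a sketch:

        // plain DOM property
        var myclass = node.className;

        // or wrap the node in jQuery first, then use .attr()
        var myclass2 = $(node).attr('class');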

    Read the article

  • MS SQL - Multi-Column substring matching

    - by hamlin11
    One of my clients is hooked on multi-column substring matching. I understand that CONTAINS and FREETEXT search for words (and at least in the case of CONTAINS, word prefixes). However, based on my understanding of this MSDN book, neither of these nor their variants is capable of searching substrings. I have used LIKE rather extensively (SELECT * FROM A WHERE A.B LIKE '%substr%').

    Sample table A:

        ID | Col1     | Col2     | Col3
        ---|----------|----------|----------
        1  | oklahoma | colorado | Utah
        2  | arkansas | colorado | oklahoma
        3  | florida  | michigan | florida

    The following code will give us rows 1 and 2:

        select * from A
        where Col1 like '%klah%'
           or Col2 like '%klah%'
           or Col3 like '%klah%'

    This is rather ugly, probably slow, and I just don't like it very much, probably because the implementations I'm dealing with have 10+ columns that need to be searched. The following may be a slight improvement as far as code readability goes, but performance-wise we're still in the same ball park:

        select * from A
        where (Col1 + ' ' + Col2 + ' ' + Col3) like '%klah%'

    I have thought about simply adding insert, update, and delete triggers that add the concatenated version of the above columns into a separate table that shadows this table.

    Sample Shadow_Table:

        ID | searchtext
        ---|----------------------------
        1  | oklahoma colorado Utah
        2  | arkansas colorado oklahoma
        3  | florida michigan florida

    This would allow us to perform the following query to search for '%klah%':

        select * from Shadow_Table
        where searchtext like '%klah%'

    I really don't like having to remember that this shadow table exists and that I'm supposed to use it when performing multi-column substring matching, but it probably yields pretty quick reads at the expense of write time and storage space. My gut feeling tells me there is an existing solution built into SQL Server 2008. However, I don't seem to be able to find anything other than research papers on the subject. Any help would be appreciated.
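
    One lighter-weight variant of the shadow-table idea, as a sketch: a persisted computed column keeps the concatenation on the same table without triggers (column and table names follow the example above; whether this is actually faster than the trigger approach is not something this example can confirm, and a leading-wildcard LIKE will still scan):

        -- keep the concatenated text on the base table itself, maintained automatically
        ALTER TABLE A
        ADD searchtext AS (Col1 + ' ' + Col2 + ' ' + Col3) PERSISTED;

        -- the multi-column substring search then becomes a single predicate
        SELECT * FROM A WHERE searchtext LIKE '%klah%';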

    Read the article
