Search Results

Search found 1909 results on 77 pages for 'gif'.


  • How to get unique value in jquery?

    - by jquerier
    I am learning jQuery. I have the following chunk of code in an HTML file: <table width="100%"> <tr> <td align='center'> <div> <a id='get_this' href='#'> <input type='hidden' id='id' value='1'><img src='images/1.gif'></a> </div> </td> <td align='center'> <div> <a id='get_this' href='#'> <input type='hidden' id='id' value='2'><img src='images/2.gif'></a> </div> </td> <td align='center'> <div> <a id='get_this' href='#'> <input type='hidden' id='id' value='3'><img src='images/3.gif'></a> </div> </td> </tr> What I want to do is get the value of the hidden input when I click any of the images, so that I can display the related information. For example, if I click the image whose hidden input has value 1, I will display information about id 1 somewhere else. I tried this: $("a#get_this").click(function(){ var id = $('input[type=hidden]#id').val(); window.alert("You have chosen the id: " + id); }); It always returns id 1 to me.
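    A minimal sketch of one common fix (an editorial aside, not from the original question): id values must be unique, so switching the repeated ids to classes and reading the hidden input relative to the clicked link via $(this) returns the right value for each image.

        <!-- one cell per image; classes instead of repeated ids -->
        <a class="get_this" href="#"><input type="hidden" class="id" value="1"><img src="images/1.gif"></a>

        $("a.get_this").click(function (e) {
            e.preventDefault();
            // look inside the link that was actually clicked
            var id = $(this).find("input.id").val();
            window.alert("You have chosen the id: " + id);
        });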

    Read the article

  • PNG's In Mac on Java cause massive CPU usage

    - by alexganose
    Hey, I've been having this problem for a while now and I was hoping someone could help. I make small games using Java on Mac OS X 10.6.3, and if I use PNG as the image format my CPU usage by Java skyrockets to around 50% (on a very small 2D game). However, if I use GIF as the format, my CPU usage by Java stays constant at 10%, which is reasonable. What is causing this problem? It occurs on every game I develop using PNGs, so I always just switch to GIFs. The problem is that I now need a PNG for its variable alpha (partial transparency) rather than the plain on/off transparency GIF offers. The problem is present on Java SE 6 and previous versions. I am using an early 2009 MacBook Pro 15". The problem does not occur on a Windows PC running the same game: the CPU usage due to Java using PNGs on Windows (I have tried XP, Vista and 7) is consistently low at ~10%. Any help would be greatly appreciated. Thanks :)
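    A hedged sketch of one commonly suggested mitigation (an assumption, not something stated in the question): copy each decoded PNG into a GraphicsConfiguration-compatible image once at load time, so Java 2D does not have to convert the pixel format on every frame; TRANSLUCENT preserves the PNG's alpha channel.

        import java.awt.*;
        import java.awt.image.BufferedImage;
        import java.io.File;
        import java.io.IOException;
        import javax.imageio.ImageIO;

        public final class FastImages {
            // Copy a decoded image into the screen's native pixel layout, keeping full alpha.
            public static BufferedImage toCompatible(BufferedImage src) {
                GraphicsConfiguration gc = GraphicsEnvironment
                        .getLocalGraphicsEnvironment()
                        .getDefaultScreenDevice()
                        .getDefaultConfiguration();
                BufferedImage out = gc.createCompatibleImage(
                        src.getWidth(), src.getHeight(), Transparency.TRANSLUCENT);
                Graphics2D g = out.createGraphics();
                g.drawImage(src, 0, 0, null);
                g.dispose();
                return out;
            }

            // Load a PNG once at startup and convert it immediately.
            public static BufferedImage load(String path) throws IOException {
                return toCompatible(ImageIO.read(new File(path)));
            }
        }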

    Read the article

  • <span> containing 3 overlapping images has 3x the necessary width

    - by Nathan Parrish
    Hi guys, I have a <span> element containing three overlapping images. Inspecting the element in Chrome shows this: <span id="span1"> <img id="img1" src="images/progressbar.gif" width="120" style="position: relative; z-index: 3;"> <img id="img2" src="images/progressbar.gif" style="width: 120px; height: 12px; position: relative; left: -120px; z-index: 2;"> <img id="img3" src="images/progressbar.gif" style="width: 120px; height: 12px; position: relative; left: -240px; z-index: 1;"> </span> The important point is that the second two images are given a relative position, shifting them to the left so they perfectly overlap the first. But the span itself is still 360 pixels wide (3 x 120 pixels per image). So how can I achieve this effect while keeping the span width tightly bounded around the images? Thanks!
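    One possible approach (a sketch, not from the original post): make the span the positioning context and pull the second and third images out of the normal flow with position: absolute, so only the first image contributes to the span's width.

        <span id="span1" style="position: relative; display: inline-block;">
          <img id="img1" src="images/progressbar.gif" width="120" style="position: relative; z-index: 3;">
          <!-- absolutely positioned copies no longer widen the span -->
          <img id="img2" src="images/progressbar.gif" style="width: 120px; height: 12px; position: absolute; top: 0; left: 0; z-index: 2;">
          <img id="img3" src="images/progressbar.gif" style="width: 120px; height: 12px; position: absolute; top: 0; left: 0; z-index: 1;">
        </span>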

    Read the article

  • Trying to create a group of button sprites

    - by user1449653
    Good day, I have about 15 images that I need to turn into buttons. I have buttons working with a Box class, which looks like this: class Box(pygame.sprite.Sprite): def __init__(self): pygame.sprite.Sprite.__init__(self) self.image = pygame.Surface((35, 30)) self.image = self.image.convert() self.image.fill((255, 0, 0)) self.rect = self.image.get_rect() self.rect.centerx = 25 self.rect.centery = 505 self.dx = 10 self.dy = 10 I am trying to make the buttons work with image sprites, so I attempted to copy the style of the Box class for my icons. The code looks like this: class Icons(pygame.sprite.Sprite): def __init__(self): pygame.sprite.Sprite.__init__(self) self.image = pygame.image.load("images/airbrushIC.gif").convert() self.rect = self.image.get_rect() self.rect.x = 25 self.rect.y = 550 and the code in main(): rect = image.get_rect() rect.x = 25 rect.y = 550 ic1 = Icons((screen.get_rect().x, screen.get_rect().y)) screen.blit(ic1.image, ic1.rect) pygame.display.update() This code produces either a positional error (__init__ accepts 1 argument but 2 are given) or an "image is not referenced" error inside the Icons class. I'm unsure whether this is the right way to go about it anyway. I know that I need to load all the images as sprites, store them in an array, and then have the mouse check whether it is clicking one of the items in the array using a for loop. Thanks. EDIT, QUESTION 2: class Icons(pygame.sprite.Sprite): def __init__(self, *args): pygame.sprite.Sprite.__init__(self, *args) self.image = pygame.image.load("images/airbrushIC.gif").convert() self.rect = self.image.get_rect() ic1 = self.image self.rect.x = 10 self.rect.y = 490 self.image = pygame.image.load("images/fillIC.gif").convert() self.rect = self.image.get_rect() ic2 = self.image self.rect.x = 10 self.rect.y = 540 Thanks to your help I got the Icons class loading one image, but not both, obviously because the first is being overwritten by the second. It seems that a class used this way isn't what I need, which raises the question of how to make sprites outside of a class. If there is a way to make the class work, please let me know.
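    A hedged sketch of one way to structure this (the file names are made up for illustration): give the sprite class the image path and position as constructor arguments, build one instance per image, keep them in a sprite Group, and test clicks with rect.collidepoint.

        import pygame

        class Icon(pygame.sprite.Sprite):
            """One clickable button sprite; image file and position are passed in."""
            def __init__(self, image_path, pos, *groups):
                pygame.sprite.Sprite.__init__(self, *groups)
                self.image = pygame.image.load(image_path).convert()
                self.rect = self.image.get_rect(topleft=pos)

        # hypothetical file names -- swap in your own 15 images
        icon_files = ["images/airbrushIC.gif", "images/fillIC.gif"]

        icons = pygame.sprite.Group()
        for i, name in enumerate(icon_files):
            Icon(name, (10, 490 + i * 50), icons)   # each instance adds itself to the group

        # in the event loop:
        #     if event.type == pygame.MOUSEBUTTONDOWN:
        #         for icon in icons:
        #             if icon.rect.collidepoint(event.pos):
        #                 print("clicked", icon)
        #
        # each frame: icons.draw(screen); pygame.display.update()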

    Read the article

  • Nginx location issue

    - by dave
    I'm trying to set a longer (30 day) 'expires' header for the images (and only the images) in the /misc-stuff/ directory. This is what I'm using for my site: # Serve static files directly from nginx location ~* \.(jpg|jpeg|gif|png|bmp|ico|pdf|flv|swf|exe|html|htm|txt|css|js) { add_header Cache-Control public; add_header Cache-Control must-revalidate; expires 7d; } I want to keep that code to handle regular site images, but create a new block to handle the /misc-stuff/ directory. I have tried: location ^~ /misc-stuff/ { ... } The problem I'm having now is that my backup .php files in that directory show up as plain text if someone tries to access them. How do I set it up so ONLY .gif images in the /misc-stuff/ directory are affected?
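    A hedged sketch of one way to scope the longer expiry (untested; adjust to your layout): nginx checks regex locations in the order they appear and uses the first match, so a more specific pattern placed above the general static-file block can target only GIFs under /misc-stuff/, while .php files in that directory keep falling through to the normal PHP handling.

        # only .gif under /misc-stuff/ -- put this before the general static-file block
        location ~* ^/misc-stuff/.+\.gif$ {
            add_header Cache-Control public;
            expires 30d;
        }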

    Read the article

  • Where / how does Apache generate the HTML code used in the default directory listing?

    - by Ellen B
    I am looking to modify the HTML that apache generates for its default directory listing. I already know how to create a HEADER.html file that gets included for every directory listing. I am attempting to change the actual html that Apache generates for the file listing itself; right now my MacOS apache generates this for example: <table><tr><th><img src="/icons/blank.gif" alt="[ICO]"></th><th><a href="?C=N;O=D">Name</a></th><th><a href="?C=M;O=A">Last modified</a></th><th><a href="?C=S;O=A">Size</a></th><th><a href="?C=D;O=A">Description</a></th></tr><tr><th colspan="5"><hr></th></tr> <tr><td valign="top"><img src="/icons/folder.gif" alt="[DIR]"></td><td><a href="ios-prototype/">ios-prototype/</a> </td><td align="right">07-Dec-2012 16:47 </td><td align="right"> - </td><td>&nbsp;</td></tr> <tr><td valign="top"><img src="/icons/folder.gif" alt="[DIR]"></td><td><a href="magneto-git/">magneto-git/</a> </td><td align="right">07-Dec-2012 16:46 </td><td align="right"> - </td><td>&nbsp;</td></tr> <tr><th colspan="5"><hr></th></tr> </table> I want a different HTML structure (like, say, an OL) generated when my server spits back directory listings. (FYI I'm doing a bunch of mobile browser prototyping with my local webserver & need to make it not totally horrible to browse with fingers to the right test directory — the table structure sucks, and while I can mod a lot of it with CSS it's still going to be ganky.)
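    A hedged aside (an assumption about mod_autoindex behaviour, so verify on your Apache build): the listing markup comes from mod_autoindex and is not freely templatable, but turning FancyIndexing off makes Apache emit a plain unordered list of links instead of the table, which is much easier to restyle for touch. Something like this in the test directory's <Directory> block or .htaccess:

        # hypothetical .htaccess for the test directory -- untested sketch
        Options +Indexes
        IndexOptions -FancyIndexing
        HeaderName HEADER.html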

    Read the article

  • selective backup script in bash

    - by Sake
    Hi, I've been using this simple command (that's all I can do :) to back up the whole tree of my user data on a NAS server for a year: cp -r /STORAGE /BACKUP-STORAGE/YYYY-MM-DD Unfortunately, after a year of service, my users have started filling the space with lots of photos and clip art (jpg, gif, bmp), and that is making my backup process much slower. Space is also a big issue: I no longer have enough room for a week-long set of daily backups. I think I want to change from backing up everything to backing up only non-image data. How can I exclude jpg, gif, and bmp files from the backup? It's quite easy with the DOS XCOPY command, but I really have no idea how to do that in bash. Thanks
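    A hedged sketch of one way to do it with rsync (assumes rsync is available on the NAS; exclude patterns are case-sensitive, hence both spellings):

        #!/bin/bash
        # copy everything except common image types into a dated backup folder
        dest="/BACKUP-STORAGE/$(date +%Y-%m-%d)"
        mkdir -p "$dest"
        rsync -a \
          --exclude='*.jpg' --exclude='*.JPG' \
          --exclude='*.gif' --exclude='*.GIF' \
          --exclude='*.bmp' --exclude='*.BMP' \
          /STORAGE/ "$dest/"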

    Read the article

  • outlook signature problem

    - by user20989
    I have created a signature for my Windows Mail. The signature HTML file has the following code: <table border="0" cellspacing="0" cellpadding="0"> <tr> <td>Regards</td> </tr> <tr> <td><img src="http://images.google.com/intl/en_ALL/images/logos/images_logo_lg.gif" align="Signature Picture" /></td> </tr> </table> It works fine when I send a message, but if I forward the sent item I get the signature without the image displaying. When I check the source of the email, I find that the signature image has been changed automatically to something like: <IMG height=128 alt="Compnay Logo" src="mhtml:{4B829C94-37FC-44B9-A60C-CC4BB1E0AE9B}mid://00000152/!http://images.google.com/intl/en_ALL/images/logos/images_logo_lg.gif" width=206 border=0> How do I fix this, or is there any other way to put an image in a signature? (Note: the image is also hosted on a web server.) Thanks

    Read the article

  • 550 operation not permitted using FTP

    - by monkey_boys
    I'm using FTP to manage some files on a site I run but keep seeing this (truncated) error log: Command: DELE calendarpermission.php Response: 550 calendarpermission.php: Operation not permitted [...] Command: DELE button_down.gif Response: 550 button_down.gif: Operation not permitted Command: CWD /domains/example.com/public_html/admincp Response: 250 CWD command successful Command: PWD Response: 257 "/domains/example.com/public_html/admincp" is the current directory Command: RMD control_examples Response: 550 control_examples: Operation not permitted Command: CWD /domains/example.com/public_html Response: 250 CWD command successful Command: PWD Response: 257 "/domains/example.com/public_html" is the current directory Command: RMD admincp Response: 550 admincp: Operation not permitted Status: Retrieving directory listing... Command: PASV Response: 227 Entering Passive Mode (122,155,5,50,138,244). Command: MLSD Response: 150 Opening ASCII mode data connection for MLSD Response: 226 Transfer complete Status: Directory listing successful Status: Set permissions of '/domains/example.com/public_html/admincp' to '777' Command: SITE CHMOD 777 admincp Response: 550 CHMOD 777 admincp: Operation not permitted What do I do to solve this?

    Read the article

  • Nginx, proxy passing to Apache, and SSL

    - by Vic
    I have Nginx and Apache set up with Nginx proxy-passing everything to Apache except static resources. I have a server set up for port 80 like so: server { listen 80; server_name *.example1.com *.example2.com; [...] location ~* \.(?:ico|css|js|gif|jpe?g|png|pdf|te?xt)$ { access_log off; expires max; add_header Pragma public; add_header Cache-Control "public, must-revalidate, proxy-revalidate"; add_header Vary: Accept-Encoding; } location / { proxy_pass http://127.0.0.1:8080; include /etc/nginx/conf.d/proxy.conf; } } And since we have multiple ssl sites (with different ssl certificates) I have a server{} block for each of them like so: server { listen 443 ssl; server_name *.example1.com; [...] location ~* \.(?:ico|css|js|gif|jpe?g|png|pdf|te?xt)$ { access_log off; expires max; add_header Pragma public; add_header Cache-Control "public, must-revalidate, proxy-revalidate"; add_header Vary: Accept-Encoding; } location / { proxy_pass https://127.0.0.1:8443; include /etc/nginx/conf.d/proxy.conf; proxy_set_header X-Forwarded-Port 443; proxy_set_header X-Forwarded-Proto https; } } server { listen 443 ssl; server_name *.example2.com; [...] location ~* \.(?:ico|css|js|gif|jpe?g|png|pdf|te?xt)$ { access_log off; expires max; add_header Pragma public; add_header Cache-Control "public, must-revalidate, proxy-revalidate"; add_header Vary: Accept-Encoding; } location / { proxy_pass https://127.0.0.1:8445; include /etc/nginx/conf.d/proxy.conf; proxy_set_header X-Forwarded-Port 443; proxy_set_header X-Forwarded-Proto https; } } First of all, I think there is a very obvious problem here, which is that I'm double-encrypting everything, first at the nginx level and then again by Apache. To make everything worse, I just started using Amazon's Elastic Load Balancer, so I added the certificate to the ELB and now SSL encryption is happening three times. That's gotta be horrible for performance. What is the sane way to handle this? Should I be forwarding https on the ELB - http on nginx - http on apache? Secondly, there is so much duplication above. Is the best method to not repeat myself to put all of the static asset handling in an include file and just include it in the server?
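    A hedged sketch of the de-duplication half (the file name is made up): move the shared static-asset location into its own file and include it from every server block. For the triple-encryption question, one common arrangement is to terminate TLS once at the ELB and speak plain HTTP to nginx and Apache, relying on the X-Forwarded-Proto header the load balancer sets so the application still knows the original scheme.

        # /etc/nginx/conf.d/static-assets.conf  (hypothetical shared include)
        location ~* \.(?:ico|css|js|gif|jpe?g|png|pdf|te?xt)$ {
            access_log off;
            expires max;
            add_header Pragma public;
            add_header Cache-Control "public, must-revalidate, proxy-revalidate";
            add_header Vary Accept-Encoding;
        }

        # in each server block:
        #   include /etc/nginx/conf.d/static-assets.conf;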

    Read the article

  • How do I use a URL path instead of a file path in an Open File dialog in Mac OSX or ChromiumOS?

    - by Chris
    In Windows 7 (and perhaps earlier), the default "Open File" dialog box allows you to type a full URL into the "File name" section as if it were a file path, e.g. "http://www.example.com/pic.gif" instead of "C:/windows/pictures/pic.gif". When uploading a file to a website on the client side - say, an image - this allows the client to upload a picture located on a server accessible via the URL instead of downloading the image, saving it locally, then referencing the local image in the "Open File" dialog. It's a great option for Windows users. I have three separate questions: What is this procedure formally called? How do I describe this succinctly so that my searches for more information are fruitful? Can something similar be done in Mac OSX, Chromium OS, or a Linux environment? If so, how? Thanks!

    Read the article

  • Imagemagick/File upload abuse causing my memory errors

    - by kidcapital
    I had been running out of memory on my server lately, and I noticed some individuals uploading the same "file" over and over in quick succession, which locks up my instance of mini_magick. Eventually mogrify gets stuck in an infinite loop. I've taken care of it by having a daemon watch the mogrify process and kill it if it gets out of control, but I was wondering if there is a better solution. You can see the same *.gif being uploaded in quick succession. I tried downloading this file too, and it isn't even a GIF; I don't know what it is (I can't open it). Has anyone experienced this kind of exploit before?
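    A hedged sketch of one defensive pattern (shell-level and generic, not specific to mini_magick; the path variable is illustrative): reject anything ImageMagick cannot identify as an image before processing, and cap mogrify's run time so a malformed upload cannot spin forever.

        #!/bin/bash
        upload="$1"   # hypothetical path to the uploaded file

        # refuse files that don't identify as images at all
        if ! timeout 5 identify "$upload" >/dev/null 2>&1; then
            echo "rejecting $upload: not a recognizable image" >&2
            exit 1
        fi

        # even for valid-looking files, don't let mogrify run unbounded
        timeout 30 mogrify -strip -resize '800x800>' "$upload"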

    Read the article

  • AuthSub token from Google/YouTube API is always returned as invalid

    - by Miriam Raphael Roberts
    Anyone out there have experience with the YouTube/Google API? I am trying to login to Google/Youtube using clientLogin, retrieve an AuthSub token, exchange it for a multi-session token and then use it in our upload form. Just a note that we are not going to have other users logging into our (secure) website, this is for our use only (no multi-users). We just want a way to upload videos to our YT account via our own website without having to login/upload to YouTube. Ultimately, everything is dependent on the first step. My AuthSub token is always being returned as invalid (Error '403'). All the steps I used are below with username/password changed. Anyone have an insight on why my AuthSub is always invalid? I am spending an enormous amount of time trying to get this to work. STEP 1: Getting the authsub token from Youtube/Google POST /youtube/accounts/ClientLogin HTTP/1.1 User-Agent: curl/7.10.6 (i386-redhat-linux-gnu) libcurl/7.10.6 OpenSSL/0.9.7a ipv6 zlib/1.1.4 Host: www.google.com Pragma: no-cache Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */* Content-Type:application/x-www-form-urlencoded Content-Length: 86 Email=MyGoogleUsername&Passwd=MyGooglePasswd&accountType=GOOGLE&service=youtube&source=Test RESPONSE RECEIVED: Auth=AIwbFAR99f3iACfkT-5PXCB-1tN4vlyP_1CiNZ8JOn6P-......yv4d4zeGRemNm4il1e-M6czgfDXAR0w9fQ YouTubeUser=MyYouTubeUsername CURL COMMAND USED: /usr/bin/curl -S -v --location https://www.google.com/youtube/accounts/ClientLogin --data Email=MyGoogleUsername&Passwd=MyGooglePasswd&accountType=GOOGLE&service=youtube&source=Test --header Content-Type:application/x-www-form-urlencoded STEP 2: Exchanging the AuthSub token for a multi-use token GET /accounts/AuthSubSessionToken HTTP/1.1 User-Agent: curl/7.10.6 (i386-redhat-linux-gnu) libcurl/7.10.6 OpenSSL/0.9.7a ipv6 zlib/1.1.4 Host: www.google.com Pragma: no-cache Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */* Content-Type:application/x-www-form-urlencoded Authorization: AuthSub token="AIwbFASiRR3XDKs......p5Oy_VA_9U2yV1enxJoVGSgMlZqTcjKw9mS861vlc9GWTH9D9sQ" Response received: 403 Invalid AuthSub token. curl command used: /usr/bin/curl -S -v --location https://www.google.com/accounts/AuthSubSessionToken --header Content-Type:application/x-www-form-urlencoded -H Authorization: AuthSub token="AIwbFAQR_4xG2g.....vp3BQZW5XEMyIj_wFozHSTEQ-BQRfYuIY-1CyqLeQ" STEP 3: Checking to see if the token is good/valid GET /accounts/AuthSubTokenInfo HTTP/1.1 User-Agent: curl/7.10.6 (i386-redhat-linux-gnu) libcurl/7.10.6 OpenSSL/0.9.7a ipv6 zlib/1.1.4 Host: www.google.com Pragma: no-cache Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */* Content-Type:application/x-www-form-urlencoded Authorization: AuthSub token="AIwbFASiRR3XDKsNkaIoPaujN5RQhKs3u.....A_9U2yV1enxJoVGSgMlZqTcjKw9mS861vlc9GWTH9D9sQ" Received response: 403 Invalid AuthSub token. 
curl command used: /usr/bin/curl -S -v --location https://www.google.com/accounts/AuthSubTokenInfo --header Content-Type:application/x-www-form-urlencoded -H Authorization: AuthSub token="AIwbFAQR_4xG2gHoAKDsNdFqdZdwWjGeNquOLpvp3BQZW5XEMyIj_wFozHSTEQ-BQRfYuIY-1CyqLeQ" STEP 4: Trying to get the upload token using the authsub token POST /action/GetUploadToken HTTP/1.1 User-Agent: curl/7.10.6 (i386-redhat-linux-gnu) libcurl/7.10.6 OpenSSL/0.9.7a ipv6 zlib/1.1.4 Host: gdata.youtube.com Pragma: no-cache Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */* Content-Type:application/atom+xml Authorization: AuthSub token="AIwbFASiRR3XDKsNkaIoPaujN5RQhp5Oy_VA_9U2yV1enxJoVGSgMlZqTcjKw9mS861vlc9GWTH9D9sQ" X-Gdata-Key:key="AI39si5EQyo-TZPFAnmGjxJGFKpxd_7a6hEERh_3......R82AShoQ" Content-Length:0 GData-Version:2 Recevied Response: 401 Token invalid - Invalid AuthSub token. Curl command used: /usr/bin/curl -S -v --location http://gdata.youtube.com/action/GetUploadToken -H Content-Type:application/atom+xml -H Authorization: AuthSub token="AIwbFASiRR3XDKs....sYDp5Oy_VA_9U2yV1enxJoVGSgMlZqTcjKw9mS861vlc9GWTH9D9sQ" -H X-Gdata-Key:key="AI39si5EQyo-TZPFAnmGjxJGF......Kpxd6dN2J1oHFQYTj_7a6hEERh_3E48R82AShoQ" -H Content-Length:0 -H GData-Version:2
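    A hedged guess at the root cause, with an illustrative request (token values shortened; this targets the long-deprecated GData API, so treat it as historical): the Auth value returned by ClientLogin is a ClientLogin token rather than an AuthSub token, so it is normally sent with the GoogleLogin scheme, and AuthSubSessionToken only accepts single-use tokens minted by the browser-based AuthSubRequest flow.

        # untested sketch: send the ClientLogin token with the GoogleLogin scheme
        curl -S -v --location -d '' \
          -H 'Content-Type: application/atom+xml' \
          -H 'Authorization: GoogleLogin auth=AIwbFAR99f3iACfkT...' \
          -H 'X-GData-Key: key=AI39si5EQyo-TZPFAnmGjxJGFKpxd...' \
          -H 'GData-Version: 2' \
          http://gdata.youtube.com/action/GetUploadToken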

    Read the article

  • A couple of pattern matching issues in Lua

    - by Josh
    I'm fairly new to lua programming, but I'm also a quick study. I've been working on a weather forecaster for a program that I use, and it's working well, for the most part. Here is what I have so far. (Pay no attention to the zs.stuff. That is program specific and has no bearing on the lua coding.) if not http then http = require("socket.http") end local locale = string.gsub(zs.params(1),"%s+","%%20") local page = http.request("http://www.wunderground.com/cgi-bin/findweather/getForecast?query=" .. locale .. "&wuSelect=WEATHER") local location = string.match(page,'title="([%w%s,]+) RSS"') --print("Gathering weather information for " .. location .. ".") --local windspeed = string.match(page,'<span class="nobr"><span class="b">([%d.]+)</span>&nbsp;mph</span>') --print(windspeed) local condition = string.match(page, '<td class="vaM taC"><img src="http://icons-ecast.wxug.com/i/c/a/[%w_]+.gif" width="42" height="42" alt="[%w%s]+" class="condIcon" />') --local image = string.match(page, '<img src="http://icons-ecast.wxug.com/i/c/a/(.+).gif" width="42" height="42" alt="[%w%s]+" class="condIcon" />') local temperature = string.match(page,'pwsvariable="tempf" english="&deg;F" metric="&deg;C" value="([%d.]+)">') local humidity = string.match(page,'pwsvariable="humidity" english="" metric="" value="(%d+)"') zs.say(location) --zs.say("image ./Images/" .. image .. ".gif") zs.say("<color limegreen>Condition:</color> <color white>" .. condition .. "</color>") zs.say("<color limegreen>Temperature: </color><color white>" .. temperature .. "F</color>") zs.say("<color limegreen>Humidity: </color><color white>" .. humidity .. "%</color>") My main issue is this: I changed the 'condition' and added the 'image' variables to what they are now. Even though the line it's supposed to be matching comes directly from the webpage, it fails to match at all. So I'm wondering what it is I'm missing that's preventing this code from working. If I take out the <td class="vaM taC">< img src="http://icons-ecast.wxug.com/i/c/a/[%w_]+.gif" it'll match condition flawlessly. (For whatever reason, I can't get the above line to display correctly, but there is no space between the `< and img) Can anyone point out what is wrong with it? Aside from the pattern matching, I assure you the line is verbatim from the webpage. Another question I had is the ability to match across line breaks. Is there any possible way to do this? The reason why I ask is because on that same page, a few of the things I need to match are broken up on separate lines, and since the actual pattern I'm wanting to match shows up in other places on the page, I need to be able to match across line breaks to get the exact pattern. I appreciate any help in this matter!
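    A hedged sketch touching both questions (the sample text is made up; verify against the real page): the unescaped '-' in icons-ecast is treated as a lazy repetition in a Lua pattern and silently breaks the match, so escape it as %- (and ideally escape literal dots as %.); and because '.' and '%s' both match newlines, '%s+' or a lazy capture like '(.-)' will happily span line breaks between two anchors.

        -- sample page fragment with the <img ...> attributes split across lines
        local page = [[<td class="vaM taC"><img
            src="http://icons-ecast.wxug.com/i/c/a/partlycloudy.gif" width="42" />]]

        local icon = string.match(page,
            '<td class="vaM taC"><img%s+src="http://icons%-ecast%.wxug%.com/i/c/a/([%w_]+)%.gif"')
        print(icon)  --> partlycloudy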

    Read the article

  • Animated background image in a hidden <div> doesn't load or loads not animated

    - by Guanche
    Hello, I have spent the whole day trying to make a script which on "submit" hides the form and shows hidden with animated progress bar. The problem is that Internet Explorer doesn't show animated gif images in hidden divs. The images are static. I visited many websites and found a script which uses: document.getElementById(id).style.backgroundImage = 'url(/images/load.gif)'; Finally, my script works in Internet Explorer, Firefox, Opera but... Google Chrome doesn't display the image at all. I see only div text. After many tests I discovered the following: the only way to see the background image in Google Chrome is to include the same image somewhere in the page (outside of hidden div) with 1px dimensions: <img src="/images/load.gif" width="1" heigh="1" /> This did the trick but... after this dirty solution Microsoft Explorer for some reason shows the image as static again. So, my question is: is there any way how to force Gogle Chrome to show the image? Thanks. This is my script: <script language="JavaScript" type="text/javascript"> function ver (id, elementId){ if (document.getElementById('espera').style.visibility == "visible") { return false; }else{ var esplit = document.forms[0]['userfile'].value.split("."); ext = esplit[esplit.length-1]; if (document.forms[0]['userfile'].value == '') { alert('Please select a file'); return false; }else{ if ((ext.toLowerCase() == 'jpg')) { document.getElementById(id).style.position = 'absolute'; document.getElementById(id).style.display = 'inline'; document.getElementById(id).style.visibility = "visible"; document.getElementById(id).style.backgroundImage = 'url(/images/load.gif)'; document.getElementById(id).style.height = "100px"; document.getElementById(id).style.backgroundColor = '#f3f3f3'; document.getElementById(id).style.backgroundRepeat = "no-repeat"; document.getElementById(id).style.backgroundPosition = "50% 50%"; var element; if (document.all) element = document.all[elementId]; else if (document.getElementById) element = document.getElementById(elementId); if (element && element.style) element.style.display = 'none'; return true; }else{ alert('This is not a jpg file'); return false; } } } } </script> <div id="frmDiv"> <form enctype="multipart/form-data" action="/upload.php" method="post" name="upload3" onsubmit="return ver('espera','frmDiv');"> <input type="hidden" name="max_file_size" value="4194304" /> <table border="0" cellspacing="1" cellpadding="2" width="100%"> <tr bgcolor="#f5f5f5"> <td>File (jpg)</td> <td> <input type="file" name="userfile" class="upf" /></td></tr> <tr bgcolor="#f5f5f5"> <td>&nbsp;</td> <td> <input class="upf2" type="submit" name="add" value="Upload" /> </td></tr></table></form> </div> <div id="espera" style="display:none;text-align:center;float:left;width:753px;">&nbsp;<br />&nbsp;<br />&nbsp;<br />&nbsp;<br /> &nbsp;<br />Please wait...<br />&nbsp; </div>
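    A hedged workaround sketch (commonly suggested for spinner GIFs; it may or may not interact with the IE static-frame quirk described above): preload the GIF from script when the page loads instead of embedding a 1px <img>, so Chrome has the animated image cached before the hidden div is revealed.

        // preload the spinner once, as early as possible
        var spinnerPreload = new Image();
        spinnerPreload.src = '/images/load.gif';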

    Read the article

  • SharePoint Feature suggestion

    - by barathan
    I have written a feature(Site scoped) that adds custom menu items to the New Menu and EditControlBlock of document library. These menu items should show up only when the user has add and edit permissions for that document library. If he selected the menu, url is redirected to my webpart. Webpart is deployed in site collection. To do this i have two way. I mentioned in as case 1 & case 2. But in the both cases i failed to fulfill my requirement Below are the sample entries in Feature and Element manifest file I am passing the current location to sourceurl in order to get the folder url <?xml version="1.0" encoding="utf-8" ?> <Feature Id="59bba8e7-0cfc-46e3-9285-4597f8085e76" Title="My Custom Menus" Scope="Site" xmlns="http://schemas.microsoft.com/sharepoint/"> <ElementManifests> <ElementManifest Location="Elements.xml" /> </ElementManifests></Feature> Case 1: <Elements xmlns="http://schemas.microsoft.com/sharepoint/"> <CustomAction Id="EditMenu1" RegistrationType="FileType" RegistrationId="txt" Location="EditControlBlock" Sequence="106" ImageUrl="/_layouts/images/PPT16.GIF" Title="My Edit Menu" Rights="AddListItems,EditListItems"> <UrlAction Url="javascript:var surl='{SiteUrl}'; window.location='/test/mypage.aspx?siteurl='+surl+'&amp;itemurl={ItemUrl}&amp;itemid={ItemId}&amp;listid={ListId}&amp;Source='+window.location" /> </CustomAction> <CustomAction Id="NewMenu1" GroupId="NewMenu" RegistrationType="List" RegistrationId="101" Location="Microsoft.SharePoint.StandardMenu" Sequence="1002" ImageUrl ="/_layouts/images/DOC32.GIF" Title="My New Menu" Rights="AddListItems,EditListItems"> <UrlAction Url="javascript:var surl='{SiteUrl}'; window.location='/test/mypage.aspx?siteurl='+surl+'&amp;listid={ListId}&amp;Source='+window.location" /> </CustomAction> </Elements> If i use the above code, it was not redirected to site collection instead of it is redirecting to rootsite. Is there is any way to get the site collection variable. To overcome this issue i used the following code: Case 2: <?xml version="1.0" encoding="utf-8" ?> <Elements xmlns="http://schemas.microsoft.com/sharepoint/"> <CustomAction Id="EditMenu1" RegistrationType="FileType" RegistrationId="txt" Location="EditControlBlock" Sequence="106" ImageUrl="/_layouts/images/PPT16.GIF" Title="My Edit Menu" Rights="AddListItems,EditListItems"> <UrlAction Url="~sitecollection/test/mypage.aspx?siteurl={SiteUrl}&amp;itemurl={ItemUrl}&amp;itemid={ItemId}&amp;listid={ListId}&amp;Source=/" /> </CustomAction> <CustomAction Id="NewMenu1" GroupId="NewMenu" RegistrationType="List" RegistrationId="101" Location="Microsoft.SharePoint.StandardMenu" Sequence="1002" ImageUrl ="/_layouts/images/DOC32.GIF" Title="My New Menu" Rights="AddListItems,EditListItems"> <UrlAction Url="~sitecollection/test/mypage.aspx?siteurl={SiteUrl}&amp;listid={ListId}&amp;Source=/" /> </CustomAction> </Elements> But in this case, it is correctly redirected to the site collection. But it fails to get the folder url because current location can't pass through in this case. while creating new document. Could you please suggest me either how to get the site collection url in the case 1 or how to pass the current location to the sourceul in case 2

    Read the article

  • Smooth Div Scroll jquery not scrolling

    - by Razor
    The Smooth Div Scroll is great but for some reason the area no longer scrolls when I edit or remove the #makeMeScrollable or #makeMeScrollable div.scrollableArea * When I leave it as is it works. Which is a problem for customization. and it won't work after I take the "*" out of div.scrollableArea * If I edit the part with the It's been frustrating figuring out why that part which is supposed to be editable not work at all. Any help with this jquery would be helpful! Thanks in advance! /* You can alter this CSS in order to give SmoothDivScroll your own look'n'feel */ /* Invisible left hotspot */ div.scrollingHotSpotLeft { /* The hotspots have a minimum width of 75 pixels and if there is room the will grow and occupy 10% of the scrollable area (20% combined). Adjust it to your own taste. */ min-width: 75px; width: 10%; height: 100%; /* There is a big background image and it's used to solve some problems I experienced in Internet Explorer 6. */ background-image: url(../images/big_transparent.gif); background-repeat: repeat; background-position: center center; position: absolute; z-index: 200; left: 0; /* The first cursor url is for Firefox and other browsers, the second is for Internet Explorer */ cursor: url(../images/cursors/cursor_arrow_left.cur), url(images/cursors/cursor_arrow_left.cur),w-resize; } /* Visible left hotspot */ div.scrollingHotSpotLeftVisible { background-image: url(../images/arrow_left.gif); background-color: #fff; background-repeat: no-repeat; /* Standard CSS3 opacity setting */ opacity: 0.35; /* Opacity for really old versions of Mozilla Firefox (0.9 or older) */ -moz-opacity: 0.35; /* Opacity for Internet Explorer. */ filter: alpha(opacity = 35); /* Use zoom to Trigger "hasLayout" in Internet Explorer 6 or older versions */ zoom: 1; } /* Invisible right hotspot */ div.scrollingHotSpotRight { min-width: 75px; width: 10%; height: 100%; background-image: url(../images/big_transparent.gif); background-repeat: repeat; background-position: center center; position: absolute; z-index: 200; right: 0; cursor: url(../images/cursors/cursor_arrow_right.cur), url(images/cursors/cursor_arrow_right.cur),e-resize; } /* Visible right hotspot */ div.scrollingHotSpotRightVisible { background-image: url(../images/arrow_right.gif); background-color: #fff; background-repeat: no-repeat; opacity: 0.35; filter: alpha(opacity = 35); -moz-opacity: 0.35; zoom: 1; } /* The scroll wrapper is always the same width and height as the containing element (div). Overflow is hidden because you don't want to show all of the scrollable area. */ div.scrollWrapper { position: relative; overflow: hidden; width: 100%; height: 100%; } div.scrollableArea { position: relative; width: auto; height: 100%; } #makeMeScrollable { width:100%; height: 330px; position: relative; } #makeMeScrollable div.scrollableArea * { position: relative; float: left; margin: 0; padding: 0; } http://www.smoothdivscroll.com/ //^above link to the jquery I am talking about

    Read the article

  • Making Sense of ASP.NET Paths

    - by Rick Strahl
    ASP.Net includes quite a plethora of properties to retrieve path information about the current request, control and application. There's a ton of information available about paths on the Request object, some of it appearing to overlap and some of it buried several levels down, and it can be confusing to find just the right path that you are looking for. To keep things straight I thought it a good idea to summarize the path options along with descriptions and example paths. I wrote a post about this a long time ago in 2004 and I find myself frequently going back to that page to quickly figure out which path I’m looking for in processing the current URL. Apparently a lot of people must be doing the same, because the original post is the second most visited even to this date on this blog to the tune of nearly 500 hits per day. So, I decided to update and expand a bit on the original post with a little more information and clarification based on the original comments. Request Object Paths Available Here's a list of the Path related properties on the Request object (and the Page object). Assume a path like http://www.west-wind.com/webstore/admin/paths.aspx for the paths below where webstore is the name of the virtual. .blackborder td { border-bottom: solid 1px silver; border-left: solid 1px silver; } Request Property Description and Value ApplicationPath Returns the web root-relative logical path to the virtual root of this app. /webstore/ PhysicalApplicationPath Returns local file system path of the virtual root for this app. c:\inetpub\wwwroot\webstore PhysicalPath Returns the local file system path to the current script or path. c:\inetpub\wwwroot\webstore\admin\paths.aspx Path FilePath CurrentExecutionFilePath All of these return the full root relative logical path to the script page including path and scriptname. CurrentExcecutionFilePath will return the ‘current’ request path after a Transfer/Execute call while FilePath will always return the original request’s path. /webstore/admin/paths.aspx AppRelativeCurrentExecutionFilePath Returns an ASP.NET root relative virtual path to the script or path for the current request. If in  a Transfer/Execute call the transferred Path is returned. ~/admin/paths.aspx PathInfo Returns any extra path following the script name. If no extra path is provided returns the root-relative path (returns text in red below). string.Empty if no PathInfo is available. /webstore/admin/paths.aspx/ExtraPathInfo RawUrl Returns the full root relative URL including querystring and extra path as a string. /webstore/admin/paths.aspx?sku=wwhelp40 Url Returns a fully qualified URL including querystring and extra path. Note this is a Uri instance rather than string. http://www.west-wind.com/webstore/admin/paths.aspx?sku=wwhelp40 UrlReferrer The fully qualified URL of the page that sent the request. This is also a Uri instance and this value is null if the page was directly accessed by typing into the address bar or using an HttpClient based Referrer client Http header. http://www.west-wind.com/webstore/default.aspx?Info Control.TemplateSourceDirectory Returns the logical path to the folder of the page, master or user control on which it is called. This is useful if you need to know the path only to a Page or control from within the control. For non-file controls this returns the Page path. 
/webstore/admin/ As you can see there’s a ton of information available there for each of the three common path formats: Physical Path is an OS type path that points to a path or file on disk. Logical Path is a Web path that is relative to the Web server’s root. It includes the virtual plus the application relative path. ~/ (Root-relative) Path is an ASP.NET specific path that includes ~/ to indicate the virtual root Web path. ASP.NET can convert virtual paths into either logical paths using Control.ResolveUrl(), or physical paths using Server.MapPath(). Root relative paths are useful for specifying portable URLs that don’t rely on relative directory structures and very useful from within control or component code. You should be able to get any necessary format from ASP.NET from just about any path or script using these mechanisms. ~/ Root Relative Paths and ResolveUrl() and ResolveClientUrl() ASP.NET supports root-relative virtual path syntax in most of its URL properties in Web Forms. So you can easily specify a root relative path in a control rather than a location relative path: <asp:Image runat="server" ID="imgHelp" ImageUrl="~/images/help.gif" /> ASP.NET internally resolves this URL by using ResolveUrl("~/images/help.gif") to arrive at the root-relative URL of /webstore/images/help.gif which uses the Request.ApplicationPath as the basepath to replace the ~. By convention any custom Web controls also should use ResolveUrl() on URL properties to provide the same functionality. In your own code you can use Page.ResolveUrl() or Control.ResolveUrl() to accomplish the same thing: string imgPath = this.ResolveUrl("~/images/help.gif"); imgHelp.ImageUrl = imgPath; Unfortunately ResolveUrl() is limited to WebForm pages, so if you’re in an HttpHandler or Module it’s not available. ASP.NET Mvc also has it’s own more generic version of ResolveUrl in Url.Decode: <script src="<%= Url.Content("~/scripts/new.js") %>" type="text/javascript"></script> which is part of the UrlHelper class. In ASP.NET MVC the above sort of syntax is actually even more crucial than in WebForms due to the fact that views are not referencing specific pages but rather are often path based which can lead to various variations on how a particular view is referenced. In a Module or Handler code Control.ResolveUrl() unfortunately is not available which in retrospect seems like an odd design choice – URL resolution really should happen on a Request basis not as part of the Page framework. Luckily you can also rely on the static VirtualPathUtility class: string path = VirtualPathUtility.ToAbsolute("~/admin/paths.aspx"); VirtualPathUtility also many other quite useful methods for dealing with paths and converting between the various kinds of paths supported. One thing to watch out for is that ToAbsolute() will throw an exception if a query string is provided and doesn’t work on fully qualified URLs. I wrote about this topic with a custom solution that works fully qualified URLs and query strings here (check comments for some interesting discussions too). Similar to ResolveUrl() is ResolveClientUrl() which creates a fully qualified HTTP path that includes the protocol and domain name. It’s rare that this full resolution is needed but can be useful in some scenarios. Mapping Virtual Paths to Physical Paths with Server.MapPath() If you need to map root relative or current folder relative URLs to physical URLs or you can use HttpContext.Current.Server.MapPath(). 
Inside of a Page you can do the following: string physicalPath = Server.MapPath("~/scripts/ww.jquery.js")); MapPath is pretty flexible and it understands both ASP.NET style virtual paths as well as plain relative paths, so the following also works. string physicalPath = Server.MapPath("scripts/silverlight.js"); as well as dot relative syntax: string physicalPath = Server.MapPath("../scripts/jquery.js"); Once you have the physical path you can perform standard System.IO Path and File operations on the file. Remember with physical paths and IO or copy operations you need to make sure you have permissions to access files and folders based on the Web server user account that is active (NETWORK SERVICE, ASPNET typically). Note the Server.MapPath will not map up beyond the virtual root of the application for security reasons. Server and Host Information Between these settings you can get all the information you may need to figure out where you are at and to build new Url if necessary. If you need to build a URL completely from scratch you can get access to information about the server you are accessing: Server Variable Function and Example SERVER_NAME The of the domain or IP Address wwww.west-wind.com or 127.0.0.1 SERVER_PORT The port that the request runs under. 80 SERVER_PORT_SECURE Determines whether https: was used. 0 or 1 APPL_MD_PATH ADSI DirectoryServices path to the virtual root directory. Note that LM typically doesn’t work for ADSI access so you should replace that with LOCALHOST or the machine’s NetBios name. /LM/W3SVC/1/ROOT/webstore Request.Url and Uri Parsing If you still need more control over the current request URL or  you need to create new URLs from an existing one, the current Request.Url Uri property offers a lot of control. Using the Uri class and UriBuilder makes it easy to retrieve parts of a URL and create new URLs based on existing URL. The UriBuilder class is the preferred way to create URLs – much preferable over creating URIs via string concatenation. Uri Property Function Scheme The URL scheme or protocol prefix. http or https Port The port if specifically specified. DnsSafeHost The domain name or local host NetBios machine name www.west-wind.com or rasnote LocalPath The full path of the URL including script name and extra PathInfo. /webstore/admin/paths.aspx Query The query string if any ?id=1 The Uri class itself is great for retrieving Uri parts, but most of the properties are read only if you need to modify a URL in order to change it you can use the UriBuilder class to load up an existing URL and modify it to create a new one. Here are a few common operations I’ve needed to do to get specific URLs: Convert the Request URL to an SSL/HTTPS link For example to take the current request URL and converted  it to a secure URL can be done like this: UriBuilder build = new UriBuilder(Request.Url); build.Scheme = "https"; build.Port = -1; // don't inject port Uri newUri = build.Uri; string newUrl = build.ToString(); Retrieve the fully qualified URL without a QueryString AFAIK, there’s no native routine to retrieve the current request URL without the query string. It’s easy to do with UriBuilder however: UriBuilder builder = newUriBuilder(Request.Url); builder.Query = ""; stringlogicalPathWithoutQuery = builder.ToString(); What else? I took a look through the old post’s comments and addressed as many of the questions and comments that came up in there. With a few small and silly exceptions this update post handles most of these. 
    But I’m sure there are more things that belong here. What else would be useful to add to this post so it serves as a nice all-in-one reference for paths? If you think of something, leave a comment and I’ll try to update the post with it in the future. © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET

    Read the article

  • Recover Data Like a Forensics Expert Using an Ubuntu Live CD

    - by Trevor Bekolay
    There are lots of utilities to recover deleted files, but what if you can’t boot up your computer, or the whole drive has been formatted? We’ll show you some tools that will dig deep and recover the most elusive deleted files, or even whole hard drive partitions. We’ve shown you simple ways to recover accidentally deleted files, even a simple method that can be done from an Ubuntu Live CD, but for hard disks that have been heavily corrupted, those methods aren’t going to cut it. In this article, we’ll examine four tools that can recover data from the most messed up hard drives, regardless of whether they were formatted for a Windows, Linux, or Mac computer, or even if the partition table is wiped out entirely. Note: These tools cannot recover data that has been overwritten on a hard disk. Whether a deleted file has been overwritten depends on many factors – the quicker you realize that you want to recover a file, the more likely you will be able to do so. Our setup To show these tools, we’ve set up a small 1 GB hard drive, with half of the space partitioned as ext2, a file system used in Linux, and half the space partitioned as FAT32, a file system used in older Windows systems. We stored ten random pictures on each hard drive. We then wiped the partition table from the hard drive by deleting the partitions in GParted. Is our data lost forever? Installing the tools All of the tools we’re going to use are in Ubuntu’s universe repository. To enable the repository, open Synaptic Package Manager by clicking on System in the top-left, then Administration > Synaptic Package Manager. Click on Settings > Repositories and add a check in the box labelled “Community-maintained Open Source software (universe)”. Click Close, and then in the main Synaptic Package Manager window, click the Reload button. Once the package list has reloaded, and the search index rebuilt, search for and mark for installation one or all of the following packages: testdisk, foremost, and scalpel. Testdisk includes TestDisk, which can recover lost partitions and repair boot sectors, and PhotoRec, which can recover many different types of files from tons of different file systems. Foremost, originally developed by the US Air Force Office of Special Investigations, recovers files based on their headers and other internal structures. Foremost operates on hard drives or drive image files generated by various tools. Finally, scalpel performs the same functions as foremost, but is focused on enhanced performance and lower memory usage. Scalpel may run better if you have an older machine with less RAM. Recover hard drive partitions If you can’t mount your hard drive, then its partition table might be corrupted. Before you start trying to recover your important files, it may be possible to recover one or more partitions on your drive, recovering all of your files with one step. Testdisk is the tool for the job. Start it by opening a terminal (Applications > Accessories > Terminal) and typing in: sudo testdisk If you’d like, you can create a log file, though it won’t affect how much data you recover. Once you make your choice, you’re greeted with a list of the storage media on your machine. You should be able to identify the hard drive you want to recover partitions from by its size and label. TestDisk asks you select the type of partition table to search for. In most cases (ext2/3, NTFS, FAT32, etc.) you should select Intel and press Enter. Highlight Analyse and press enter. 
In our case, our small hard drive has previously been formatted as NTFS. Amazingly, TestDisk finds this partition, though it is unable to recover it. It also finds the two partitions we just deleted. We are able to change their attributes, or add more partitions, but we’ll just recover them by pressing Enter. If TestDisk hasn’t found all of your partitions, you can try doing a deeper search by selecting that option with the left and right arrow keys. We only had these two partitions, so we’ll recover them by selecting Write and pressing Enter. Testdisk informs us that we will have to reboot. Note: If your Ubuntu Live CD is not persistent, then when you reboot you will have to reinstall any tools that you installed earlier. After restarting, both of our partitions are back to their original states, pictures and all. Recover files of certain types For the following examples, we deleted the 10 pictures from both partitions and then reformatted them. PhotoRec Of the three tools we’ll show, PhotoRec is the most user-friendly, despite being a console-based utility. To start recovering files, open a terminal (Applications > Accessories > Terminal) and type in: sudo photorec To begin, you are asked to select a storage device to search. You should be able to identify the right device by its size and label. Select the right device, and then hit Enter. PhotoRec asks you select the type of partition to search. In most cases (ext2/3, NTFS, FAT, etc.) you should select Intel and press Enter. You are given a list of the partitions on your selected hard drive. If you want to recover all of the files on a partition, then select Search and hit enter. However, this process can be very slow, and in our case we only want to search for pictures files, so instead we use the right arrow key to select File Opt and press Enter. PhotoRec can recover many different types of files, and deselecting each one would take a long time. Instead, we press “s” to clear all of the selections, and then find the appropriate file types – jpg, gif, and png – and select them by pressing the right arrow key. Once we’ve selected these three, we press “b” to save these selections. Press enter to return to the list of hard drive partitions. We want to search both of our partitions, so we highlight “No partition” and “Search” and then press Enter. PhotoRec prompts for a location to store the recovered files. If you have a different healthy hard drive, then we recommend storing the recovered files there. Since we’re not recovering very much, we’ll store it on the Ubuntu Live CD’s desktop. Note: Do not recover files to the hard drive you’re recovering from. PhotoRec is able to recover the 20 pictures from the partitions on our hard drive! A quick look in the recup_dir.1 directory that it creates confirms that PhotoRec has recovered all of our pictures, save for the file names. Foremost Foremost is a command-line program with no interactive interface like PhotoRec, but offers a number of command-line options to get as much data out of your had drive as possible. For a full list of options that can be tweaked via the command line, open up a terminal (Applications > Accessories > Terminal) and type in: foremost –h In our case, the command line options that we are going to use are: -t, a comma-separated list of types of files to search for. In our case, this is “jpeg,png,gif”. -v, enabling verbose-mode, giving us more information about what foremost is doing. -o, the output folder to store recovered files in. 
In our case, we created a directory called “foremost” on the desktop. -i, the input that will be searched for files. This can be a disk image in several different formats; however, we will use a hard disk, /dev/sda. Our foremost invocation is: sudo foremost –t jpeg,png,gif –o foremost –v –i /dev/sda Your invocation will differ depending on what you’re searching for and where you’re searching for it. Foremost is able to recover 17 of the 20 files stored on the hard drive. Looking at the files, we can confirm that these files were recovered relatively well, though we can see some errors in the thumbnail for 00622449.jpg. Part of this may be due to the ext2 filesystem. Foremost recommends using the –d command-line option for Linux file systems like ext2. We’ll run foremost again, adding the –d command-line option to our foremost invocation: sudo foremost –t jpeg,png,gif –d –o foremost –v –i /dev/sda This time, foremost is able to recover all 20 images! A final look at the pictures reveals that the pictures were recovered with no problems. Scalpel Scalpel is another powerful program that, like Foremost, is heavily configurable. Unlike Foremost, Scalpel requires you to edit a configuration file before attempting any data recovery. Any text editor will do, but we’ll use gedit to change the configuration file. In a terminal window (Applications > Accessories > Terminal), type in: sudo gedit /etc/scalpel/scalpel.conf scalpel.conf contains information about a number of different file types. Scroll through this file and uncomment lines that start with a file type that you want to recover (i.e. remove the “#” character at the start of those lines). Save the file and close it. Return to the terminal window. Scalpel also has a ton of command-line options that can help you search quickly and effectively; however, we’ll just define the input device (/dev/sda) and the output folder (a folder called “scalpel” that we created on the desktop). Our invocation is: sudo scalpel /dev/sda –o scalpel Scalpel is able to recover 18 of our 20 files. A quick look at the files scalpel recovered reveals that most of our files were recovered successfully, though there were some problems (e.g. 00000012.jpg). Conclusion In our quick toy example, TestDisk was able to recover two deleted partitions, and PhotoRec and Foremost were able to recover all 20 deleted images. Scalpel recovered most of the files, but it’s very likely that playing with the command-line options for scalpel would have enabled us to recover all 20 images. These tools are lifesavers when something goes wrong with your hard drive. If your data is on the hard drive somewhere, then one of these tools will track it down! Similar Articles Productive Geek Tips Recover Deleted Files on an NTFS Hard Drive from a Ubuntu Live CDUse an Ubuntu Live CD to Securely Wipe Your PC’s Hard DriveReset Your Ubuntu Password Easily from the Live CDBackup Your Windows Live Writer SettingsAdding extra Repositories on Ubuntu TouchFreeze Alternative in AutoHotkey The Icy Undertow Desktop Windows Home Server – Backup to LAN The Clear & Clean Desktop Use This Bookmarklet to Easily Get Albums Use AutoHotkey to Assign a Hotkey to a Specific Window Latest Software Reviews Tinyhacker Random Tips DVDFab 6 Revo Uninstaller Pro Registry Mechanic 9 for Windows PC Tools Internet Security Suite 2010 Awe inspiring, inter-galactic theme (Win 7) Case Study – How to Optimize Popular Wordpress Sites Restore Hidden Updates in Windows 7 & Vista Iceland an Insurance Job? 

    Read the article

  • Making Sense of ASP.NET Paths

    - by Renso
    Making Sense of ASP.NET Paths ASP.Net includes quite a plethora of properties to retrieve path information about the current request, control and application. There's a ton of information available about paths on the Request object, some of it appearing to overlap and some of it buried several levels down, and it can be confusing to find just the right path that you are looking for. To keep things straight I thought it a good idea to summarize the path options along with descriptions and example paths. I wrote a post about this a long time ago in 2004 and I find myself frequently going back to that page to quickly figure out which path I’m looking for in processing the current URL. Apparently a lot of people must be doing the same, because the original post is the second most visited even to this date on this blog to the tune of nearly 500 hits per day. So, I decided to update and expand a bit on the original post with a little more information and clarification based on the original comments. Request Object Paths Available Here's a list of the Path related properties on the Request object (and the Page object). Assume a path like http://www.west-wind.com/webstore/admin/paths.aspx for the paths below where webstore is the name of the virtual. Request Property Description and Value ApplicationPath Returns the web root-relative logical path to the virtual root of this app. /webstore/ PhysicalApplicationPath Returns local file system path of the virtual root for this app. c:\inetpub\wwwroot\webstore PhysicalPath Returns the local file system path to the current script or path. c:\inetpub\wwwroot\webstore\admin\paths.aspx Path FilePath CurrentExecutionFilePath All of these return the full root relative logical path to the script page including path and scriptname. CurrentExcecutionFilePath will return the ‘current’ request path after a Transfer/Execute call while FilePath will always return the original request’s path. /webstore/admin/paths.aspx AppRelativeCurrentExecutionFilePath Returns an ASP.NET root relative virtual path to the script or path for the current request. If in  a Transfer/Execute call the transferred Path is returned. ~/admin/paths.aspx PathInfo Returns any extra path following the script name. If no extra path is provided returns the root-relative path (returns text in red below). string.Empty if no PathInfo is available. /webstore/admin/paths.aspx/ExtraPathInfo RawUrl Returns the full root relative URL including querystring and extra path as a string. /webstore/admin/paths.aspx?sku=wwhelp40 Url Returns a fully qualified URL including querystring and extra path. Note this is a Uri instance rather than string. http://www.west-wind.com/webstore/admin/paths.aspx?sku=wwhelp40 UrlReferrer The fully qualified URL of the page that sent the request. This is also a Uri instance and this value is null if the page was directly accessed by typing into the address bar or using an HttpClient based Referrer client Http header. http://www.west-wind.com/webstore/default.aspx?Info Control.TemplateSourceDirectory Returns the logical path to the folder of the page, master or user control on which it is called. This is useful if you need to know the path only to a Page or control from within the control. For non-file controls this returns the Page path. /webstore/admin/ As you can see there’s a ton of information available there for each of the three common path formats: Physical Path is an OS type path that points to a path or file on disk. 
    Logical Path is a Web path that is relative to the Web server's root. It includes the virtual directory plus the application-relative path.
    ~/ (Root-relative) Path is an ASP.NET-specific path that uses ~/ to indicate the virtual root of the Web application.

    ASP.NET can convert virtual paths into either logical paths using Control.ResolveUrl(), or physical paths using Server.MapPath(). Root-relative paths are useful for specifying portable URLs that don't rely on relative directory structures and are very useful from within control or component code. You should be able to get any necessary format from ASP.NET for just about any path or script using these mechanisms.

    ~/ Root-Relative Paths and ResolveUrl() and ResolveClientUrl()

    ASP.NET supports root-relative virtual path syntax in most of its URL properties in Web Forms, so you can specify a root-relative path in a control rather than a location-relative path:

        <asp:Image runat="server" ID="imgHelp" ImageUrl="~/images/help.gif" />

    ASP.NET internally resolves this URL by calling ResolveUrl("~/images/help.gif") to arrive at the root-relative URL /webstore/images/help.gif, using Request.ApplicationPath as the base path to replace the ~. By convention, custom Web controls should also call ResolveUrl() on their URL properties to provide the same behavior. In your own code you can use Page.ResolveUrl() or Control.ResolveUrl() to accomplish the same thing:

        string imgPath = this.ResolveUrl("~/images/help.gif");
        imgHelp.ImageUrl = imgPath;

    Unfortunately ResolveUrl() is limited to WebForms pages, so if you're in an HttpHandler or Module it's not available. ASP.NET MVC has its own, more generic version of ResolveUrl in Url.Content, which is part of the UrlHelper class:

        <script src="<%= Url.Content("~/scripts/new.js") %>" type="text/javascript"></script>

    In ASP.NET MVC this sort of syntax is actually even more crucial than in WebForms, because views don't reference specific pages but rather are often path based, which can lead to variations in how a particular view is referenced.

    In Module or Handler code Control.ResolveUrl() is unfortunately not available, which in retrospect seems like an odd design choice: URL resolution really should happen per Request, not as part of the Page framework. Luckily you can also rely on the static VirtualPathUtility class:

        string path = VirtualPathUtility.ToAbsolute("~/admin/paths.aspx");

    VirtualPathUtility also has many other quite useful methods for dealing with paths and converting between the various kinds of paths supported. One thing to watch out for is that ToAbsolute() will throw an exception if a query string is provided, and it doesn't work on fully qualified URLs. I wrote about this topic, with a custom solution that works with fully qualified URLs and query strings, here (check the comments for some interesting discussions too). Similar to ResolveUrl() is ResolveClientUrl(), which creates a fully qualified HTTP path that includes the protocol and domain name. It's rare that this full resolution is needed, but it can be useful in some scenarios.

    Mapping Virtual Paths to Physical Paths with Server.MapPath()

    If you need to map root-relative or current-folder-relative URLs to physical paths you can use HttpContext.Current.Server.MapPath(). Inside of a Page you can do the following:

        string physicalPath = Server.MapPath("~/scripts/ww.jquery.js");

    MapPath is pretty flexible and understands both ASP.NET-style virtual paths as well as plain relative paths, so the following also works:
string physicalPath = Server.MapPath("scripts/silverlight.js"); as well as dot relative syntax: string physicalPath = Server.MapPath("../scripts/jquery.js"); Once you have the physical path you can perform standard System.IO Path and File operations on the file. Remember with physical paths and IO or copy operations you need to make sure you have permissions to access files and folders based on the Web server user account that is active (NETWORK SERVICE, ASPNET typically). Note the Server.MapPath will not map up beyond the virtual root of the application for security reasons. Server and Host Information Between these settings you can get all the information you may need to figure out where you are at and to build new Url if necessary. If you need to build a URL completely from scratch you can get access to information about the server you are accessing: Server Variable Function and Example SERVER_NAME The of the domain or IP Address wwww.west-wind.com or 127.0.0.1 SERVER_PORT The port that the request runs under. 80 SERVER_PORT_SECURE Determines whether https: was used. 0 or 1 APPL_MD_PATH ADSI DirectoryServices path to the virtual root directory. Note that LM typically doesn’t work for ADSI access so you should replace that with LOCALHOST or the machine’s NetBios name. /LM/W3SVC/1/ROOT/webstore Request.Url and Uri Parsing If you still need more control over the current request URL or  you need to create new URLs from an existing one, the current Request.Url Uri property offers a lot of control. Using the Uri class and UriBuilder makes it easy to retrieve parts of a URL and create new URLs based on existing URL. The UriBuilder class is the preferred way to create URLs – much preferable over creating URIs via string concatenation. Uri Property Function Scheme The URL scheme or protocol prefix. http or https Port The port if specifically specified. DnsSafeHost The domain name or local host NetBios machine name www.west-wind.com or rasnote LocalPath The full path of the URL including script name and extra PathInfo. /webstore/admin/paths.aspx Query The query string if any ?id=1 The Uri class itself is great for retrieving Uri parts, but most of the properties are read only if you need to modify a URL in order to change it you can use the UriBuilder class to load up an existing URL and modify it to create a new one. Here are a few common operations I’ve needed to do to get specific URLs: Convert the Request URL to an SSL/HTTPS link For example to take the current request URL and converted  it to a secure URL can be done like this: UriBuilder build = new UriBuilder(Request.Url); build.Scheme = "https"; build.Port = -1; // don't inject portUri newUri = build.Uri; string newUrl = build.ToString(); Retrieve the fully qualified URL without a QueryString AFAIK, there’s no native routine to retrieve the current request URL without the query string. It’s easy to do with UriBuilder however: UriBuilder builder = newUriBuilder(Request.Url); builder.Query = ""; stringlogicalPathWithoutQuery = builder.ToString();

    Read the article

  • python: os.system does not execute shell command

    - by capoluca
    I need to execute a shell command from a Python program (I am on Ubuntu). More specifically, I want to create a graph using graphviz from a Python script. My code is:

        os.system("dot -Tpng graph.dot -o graph.png")

    It does not work, but if I just type dot -Tpng graph.dot -o graph.png on the command line then everything is fine. Do you know what the problem is? Thank you!

    Edit: "Does not work" means that nothing happens and there are no errors. Output from dot -v -Tpng graph.dot -o graph.png:

        dot - graphviz version 2.26.3 (20100126.1600)
        Activated plugin library: libgvplugin_pango.so.6
        Using textlayout: textlayout:cairo
        Activated plugin library: libgvplugin_dot_layout.so.6
        Using layout: dot:dot_layout
        Using render: cairo:cairo
        Using device: png:cairo:cairo
        The plugin configuration file: /usr/lib/graphviz/config6 was successfully loaded.
        render : cairo dot fig gd map ps svg tk vml vrml xdot
        layout : circo dot fdp neato nop nop1 nop2 osage patchwork sfdp twopi
        textlayout : textlayout
        device : canon cmap cmapx cmapx_np dot eps fig gd gd2 gif gv imap imap_np ismap jpe jpeg jpg pdf plain plain-ext png ps ps2 svg svgz tk vml vmlz vrml wbmp x11 xdot xlib
        loadimage : (lib) eps gd gd2 gif jpe jpeg jpg png ps svg
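    One hedged way to narrow this down (my suggestion, not part of the original question): os.system() only reports failure through its return value, and it resolves dot against the PATH and graph.dot against the current working directory of the Python process, both of which can differ from your interactive shell (for example when the script is launched from an IDE or cron). A subprocess-based call that raises on failure makes the actual cause visible; the /usr/bin/dot location below is an assumption, check it with `which dot`.

        import os
        import subprocess

        print(os.getcwd())  # confirm graph.dot really lives in the working directory

        # Raises CalledProcessError / OSError instead of failing silently.
        # /usr/bin/dot is assumed; substitute the path reported by `which dot`.
        subprocess.check_call(["/usr/bin/dot", "-Tpng", "graph.dot", "-o", "graph.png"])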

    Read the article

  • OBIEE 11.1.1 - How to enable HTTP compression and caching in Oracle iPlanet Web Server

    - by Ahmed Awan
    1. To implement HTTP compression and caching, install and configure Oracle iPlanet Web Server 7.0.x for the bi_serverN Managed Servers (refer to http://docs.oracle.com/cd/E23943_01/web.1111/e16435/iplanet.htm).

    2. On the Oracle iPlanet Web Server machine, open the obj.conf configuration file for editing. (Guidelines for modifying the obj.conf file are available at http://download.oracle.com/docs/cd/E19146-01/821-1827/821-1827.pdf.)

    3. Add the following lines to obj.conf inside <Object name="default"> ... </Object> and restart Oracle iPlanet Web Server:

        #HTTP Caching
        <If $path =~ '^(.*)\.(jpg|jpeg|gif|png|css|js)$'>
        ObjectType fn="set-variable" insert-srvhdrs="Expires:$(httpdate($time + 864000))"
        </If>

        <If $path =~ '^(.*)\.(jpg|jpeg|gif|png|css|js)$'>
        PathCheck fn="set-cache-control" control="public,max-age=864000"
        </If>

        #HTTP Compression
        Output fn="insert-filter" filter="http-compression" vary="false" compression-level="9" fragment_size="8096"
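    After the restart, one quick way to confirm that both directives took effect (my addition, not part of the original note) is to request a matching static file and inspect the response headers; the host and path below are placeholders for a real .js or image resource served through Oracle iPlanet Web Server.

        # Expect "Content-Encoding: gzip", "Cache-Control: public,max-age=864000" and an
        # Expires header among the dumped response headers; host and path are placeholders.
        curl -s -o /dev/null -D - -H "Accept-Encoding: gzip" http://bihost.example.com/static/app.js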

    Read the article

  • Simple Grouping With TableSorter Plugin

    - by HurnsMobile
    Im playing around with the Tablesorter plug-in for jQuery and was trying to get a very simple grouping functionality added into it. Using the follow html/js works great until you click sort again and reverse the order. The headers get moved to the bottom of the group when this happens. The following is my (admitedly hacky) attempt at it. Does anyone have any ideas? <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" /> <title>Table Manipulation Test</title> <link type="text/css" href="css/ui-lightness/jquery-ui-1.8.1.custom.css" rel="stylesheet" /> <link rel="stylesheet" href="tablesorter/themes/green/style.css" type="text/css"/> <script type="text/javascript" src="js/jquery-1.4.2.min.js"></script> <script type="text/javascript" src="js/jquery-ui-1.8.1.custom.min.js"></script> <script type="text/javascript" src="tablesorter/jquery.tablesorter.min.js"></script> <script type="text/javascript"> $(document).ready(function() { $("#test_table").tablesorter({ sortForce: [[3,0]] }); $(".group_details").hide(); $(".group_header").click(function(){ var group = $(this).attr("group"); var $expander = $(this).children("td.expanderguy") if ($("." + group + ":visible").length){ $("." + group + "").fadeOut('fast'); $expander.html("<img src='icons/plus.gif'>"); } else{ $("." + group + "").fadeIn('fast'); $expander.html("<img src='icons/minus.gif'>"); } }); } ); </script> <style type="text/css"> .group_header td{ background-color: #888888; !important } </style> </head> <body> <table id="test_table" class="tablesorter"> <thead> <tr><th>First Name</th><th>Last Name</th><th>Email</th><th>Due Date</th><th>Amount Due</th></tr> </thead> <tbody> <tr class="group_header" group="group1"><td class="expanderguy"><img src="icons/plus.gif"></td><td></td><td></td><td>Monday, June 7</td><td></td></tr> <tr class="group_details group1"><td>Flavian</td><td>Wenceslas</td><td>[email protected]</td><td>Monday, June 7</td><td>$100</td></tr> <tr class="group_details group1"><td>Gordian</td><td>Ives</td><td>[email protected]</td><td>Monday, June 7</td><td>$1700</td></tr> <tr class="group_details group1"><td>Saladin</td><td>Tarquin</td><td>[email protected]</td><td>Monday, June 7</td><td>$1700</td></tr> <tr class="group_details group1"><td>Urban</td><td>Cyprian</td><td>[email protected]</td><td>Monday, June 7</td><td>$1500</td></tr> <tr class="group_details group1"><td>Sargon</td><td>Swithun</td><td>[email protected]</td><td>Monday, June 7</td><td>$1100</td></tr> <tr class="group_details group1"><td>Pompey</td><td>Ladislas</td><td>[email protected]</td><td>Monday, June 7</td><td>$300</td></tr> <tr class="group_details group1"><td>Attila</td><td>Hiawatha</td><td>[email protected]</td><td>Monday, June 7</td><td>$200</td></tr> <tr class="group_header" group="group2"><td class="expanderguy"><img src="icons/plus.gif"></td><td></td><td></td><td>Tuesday, June 8</td><td></td></tr> <tr class="group_details group2"><td>Bruce</td><td>Fenton</td><td>[email protected]</td><td>Tuesday, June 8</td><td>$1700</td></tr> <tr class="group_details group2"><td>Wade</td><td>Sequoia</td><td>[email protected]</td><td>Tuesday, June 8</td><td>$1400</td></tr> <tr class="group_details group2"><td>Eddie</td><td>Jerold</td><td>[email protected]</td><td>Tuesday, June 8</td><td>$1100</td></tr> <tr class="group_details group2"><td>Lynn</td><td>Lucan</td><td>[email protected]</td><td>Tuesday, June 8</td><td>$1200</td></tr> <tr class="group_details group2"><td>Taegan</td><td>Tadg</td><td>[email protected]</td><td>Tuesday, June 
8</td><td>$100</td></tr> <tr class="group_details group2"><td>Clyde</td><td>Reed</td><td>[email protected]</td><td>Tuesday, June 8</td><td>$6100</td></tr> <tr class="group_details group2"><td>Alaois</td><td>Art</td><td>[email protected]</td><td>Tuesday, June 8</td><td>$2100</td></tr> <tr class="group_details group2"><td>Gilbert</td><td>Patsy</td><td>[email protected]</td><td>Tuesday, June 8</td><td>$1500</td></tr> <tr class="group_header" group="group3"><td class="expanderguy"><img src="icons/plus.gif"></td><td></td><td></td><td>Wednesday, June 9</td><td></td></tr> <tr class="group_details group3" ><td>Clem</td><td>Eben</td><td>[email protected]</td><td>Wednesday, June 9</td><td>$2100</td></tr> <tr class="group_details group3" ><td>Elijah</td><td>Julyan</td><td>[email protected]</td><td>Wednesday, June 9</td><td>$2100</td></tr> <tr class="group_details group3" ><td>Marvyn</td><td>Damian</td><td>[email protected]</td><td>Wednesday, June 9</td><td>$1100</td></tr> <tr class="group_details group3" ><td>Sawyer</td><td>Ryker</td><td>[email protected]</td><td>Wednesday, June 9</td><td>$500</td></tr> </tbody> </table> </body>
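    One hedged way to attack this (my sketch, not from the original post): the tablesorter plugin triggers a sortEnd event on the table, so after each sort the group header rows can be re-inserted just above the first detail row of their group, which keeps them on top even when the sort direction is reversed. The selectors reuse the group_header class and group attribute from the markup above, and the handler would sit inside the existing $(document).ready() block.

        // Re-anchor each group header above its detail rows after every sort.
        $("#test_table").bind("sortEnd", function () {
            $(this).find("tr.group_header").each(function () {
                var group = $(this).attr("group");               // e.g. "group1"
                var $firstDetail = $("tr." + group + ":first");  // first detail row of that group
                if ($firstDetail.length) {
                    $(this).insertBefore($firstDetail);          // header goes back on top
                }
            });
        });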

    Read the article

  • how to apply TinyMCE to an AJAX-generated textarea

    - by Jai_pans
    Hi, i have a image multi-uloader script which also each item uploaded was preview 1st b4 it submitted and each images has its following textarea which are also generated by javascript and my problem is i want to use the tinymce editor to each textarea generated by the ajax. Any help will be appreciated.. here is my script function fileQueueError(file, errorCode, message) { try { var imageName = "error.gif"; var errorName = ""; if (errorCode === SWFUpload.errorCode_QUEUE_LIMIT_EXCEEDED) { errorName = "You have attempted to queue too many files."; } if (errorName !== "") { alert(errorName); return; } switch (errorCode) { case SWFUpload.QUEUE_ERROR.ZERO_BYTE_FILE: imageName = "zerobyte.gif"; break; case SWFUpload.QUEUE_ERROR.FILE_EXCEEDS_SIZE_LIMIT: imageName = "toobig.gif"; break; case SWFUpload.QUEUE_ERROR.ZERO_BYTE_FILE: case SWFUpload.QUEUE_ERROR.INVALID_FILETYPE: default: alert(message); break; } addImage("images/" + imageName); } catch (ex) { this.debug(ex); } } function fileDialogComplete(numFilesSelected, numFilesQueued) { try { if (numFilesQueued 0) { this.startUpload(); } } catch (ex) { this.debug(ex); } } function uploadProgress(file, bytesLoaded) { try { var percent = Math.ceil((bytesLoaded / file.size) * 100); var progress = new FileProgress(file, this.customSettings.upload_target); progress.setProgress(percent); if (percent === 100) { progress.setStatus("Creating thumbnail..."); progress.toggleCancel(false, this); } else { progress.setStatus("Uploading..."); progress.toggleCancel(true, this); } } catch (ex) { this.debug(ex); } } function uploadSuccess(file, serverData) { try { var progress = new FileProgress(file, this.customSettings.upload_target); if (serverData.substring(0, 7) === "FILEID:") { addRow("tableID","thumbnail.php?id=" + serverData.substring(7),file.name); //setup(); //generateTinyMCE('itemdescription[]'); progress.setStatus("Thumbnail Created."); progress.toggleCancel(false); } else { addImage("images/error.gif"); progress.setStatus("Error."); progress.toggleCancel(false); alert(serverData); } } catch (ex) { this.debug(ex); } } function uploadComplete(file) { try { /* I want the next upload to continue automatically so I'll call startUpload here */ if (this.getStats().files_queued 0) { this.startUpload(); } else { var progress = new FileProgress(file, this.customSettings.upload_target); progress.setComplete(); progress.setStatus("All images received."); progress.toggleCancel(false); } } catch (ex) { this.debug(ex); } } function uploadError(file, errorCode, message) { var imageName = "error.gif"; var progress; try { switch (errorCode) { case SWFUpload.UPLOAD_ERROR.FILE_CANCELLED: try { progress = new FileProgress(file, this.customSettings.upload_target); progress.setCancelled(); progress.setStatus("Cancelled"); progress.toggleCancel(false); } catch (ex1) { this.debug(ex1); } break; case SWFUpload.UPLOAD_ERROR.UPLOAD_STOPPED: try { progress = new FileProgress(file, this.customSettings.upload_target); progress.setCancelled(); progress.setStatus("Stopped"); progress.toggleCancel(true); } catch (ex2) { this.debug(ex2); } case SWFUpload.UPLOAD_ERROR.UPLOAD_LIMIT_EXCEEDED: imageName = "uploadlimit.gif"; break; default: alert(message); break; } addImage("images/" + imageName); } catch (ex3) { this.debug(ex3); } } function addRow(tableID,src,filename) { var table = document.getElementById(tableID); var rowCount = table.rows.length; var row = table.insertRow(rowCount); rowCount + 1; row.id = "row"+rowCount; var cell0 = row.insertCell(0); cell0.innerHTML = rowCount; 
cell0.style.background = "#FFFFFF"; var cell1 = row.insertCell(1); cell1.align = "center"; cell1.style.background = "#FFFFFF"; var imahe = document.createElement("img"); imahe.setAttribute("src",src); var hidden = document.createElement("input"); hidden.setAttribute("type","hidden"); hidden.setAttribute("name","filename[]"); hidden.setAttribute("value",filename); /*var hidden2 = document.createElement("input"); hidden2.setAttribute("type","hidden"); hidden2.setAttribute("name","filename[]"); hidden2.setAttribute("value",filename); cell1.appendChild(hidden2);*/ cell1.appendChild(hidden); cell1.appendChild(imahe); var cell2 = row.insertCell(2); cell2.align = "left"; cell2.valign = "top"; cell2.style.background = "#FFFFFF"; //tr1.appendChild(td1); var div2 = document.createElement("div"); div2.style.padding ="0 0 0 10px"; div2.style.width = "400px"; var alink = document.createElement("a"); //alink.style.margin="40px 0 0 0"; alink.href ="#"; alink.innerHTML ="Cancel"; alink.onclick= function () { document.getElementById(row.id).style.display='none'; document.getElementById(textfield.id).disabled='disabled'; }; var div = document.createElement("div"); div.style.margin="10px 0"; div.appendChild(alink); var textfield = document.createElement("input"); textfield.id = "file"+rowCount; textfield.type = "text"; textfield.name = "itemname[]"; textfield.style.margin = "10px 0"; textfield.style.width = "400px"; textfield.value = "Item Name"; textfield.onclick= function(){ //textfield.value=""; if(textfield.value=="Item Name") textfield.value=""; if(desc.innerHTML=="") desc.innerHTML ="Item Description"; if(price.value=="") price.value="Item Price"; } var desc = document.createElement("textarea"); desc.name = "itemdescription[]"; desc.cols = "80"; desc.rows = "4"; desc.innerHTML = "Item Description"; desc.onclick = function(){ if(desc.innerHTML== "Item Description") desc.innerHTML = ""; if(textfield.value=="Item name" || textfield.value=="") textfield.value="Item Name"; if(price.value=="") price.value="Item Price"; } var price = document.createElement("input"); price.id = "file"+rowCount; price.type = "text"; price.name = "itemprice[]"; price.style.margin = "10px 0"; price.style.width = "400px"; price.value = "Item Price"; price.onclick= function(){ if(price.value=="Item Price") price.value=""; if(desc.innerHTML=="") desc.innerHTML ="Item Description"; if(textfield.value=="") textfield.value="Item Name"; } var span = document.createElement("span"); span.innerHTML = "View"; span.style.width = "auto"; span.style.padding = "10px 0"; var view = document.createElement("input"); view.id = "file"+rowCount; view.type = "checkbox"; view.name = "publicview[]"; view.value = "y"; view.checked = "checked"; var div3 = document.createElement("div"); div3.appendChild(span); div3.appendChild(view); var div4 = document.createElement("div"); div4.style.padding = "10px 0"; var span2 = document.createElement("span"); span2.innerHTML = "Default Display"; span2.style.width = "auto"; span2.style.padding = "10px 0"; var radio = document.createElement("input"); radio.type = "radio"; radio.name = "setdefault"; radio.value = "y"; div4.appendChild(span2); div4.appendChild(radio); div2.appendChild(div); //div2.appendChild(label); //div2.appendChild(table); div2.appendChild(textfield); div2.appendChild(desc); div2.appendChild(price); div2.appendChild(div3); div2.appendChild(div4); cell2.appendChild(div2); } function addImage(src,val_id) { var newImg = document.createElement("img"); newImg.style.margin = "5px 50px 5px 5px"; 
newImg.style.display= "inline"; newImg.id=val_id; document.getElementById("thumbnails").appendChild(newImg); if (newImg.filters) { try { newImg.filters.item("DXImageTransform.Microsoft.Alpha").opacity = 0; } catch (e) { // If it is not set initially, the browser will throw an error. This will set it if it is not set yet. newImg.style.filter = 'progid:DXImageTransform.Microsoft.Alpha(opacity=' + 0 + ')'; } } else { newImg.style.opacity = 0; } newImg.onload = function () { fadeIn(newImg, 0); }; newImg.src = src; } function fadeIn(element, opacity) { var reduceOpacityBy = 5; var rate = 30; // 15 fps if (opacity < 100) { opacity += reduceOpacityBy; if (opacity > 100) { opacity = 100; } if (element.filters) { try { element.filters.item("DXImageTransform.Microsoft.Alpha").opacity = opacity; } catch (e) { // If it is not set initially, the browser will throw an error. This will set it if it is not set yet. element.style.filter = 'progid:DXImageTransform.Microsoft.Alpha(opacity=' + opacity + ')'; } } else { element.style.opacity = opacity / 100; } } if (opacity < 100) { setTimeout(function () { fadeIn(element, opacity); }, rate); } } /* ************************************** * FileProgress Object * Control object for displaying file info * ************************************** */ function FileProgress(file, targetID) { this.fileProgressID = "divFileProgress"; this.fileProgressWrapper = document.getElementById(this.fileProgressID); if (!this.fileProgressWrapper) { this.fileProgressWrapper = document.createElement("div"); this.fileProgressWrapper.className = "progressWrapper"; this.fileProgressWrapper.id = this.fileProgressID; this.fileProgressElement = document.createElement("div"); this.fileProgressElement.className = "progressContainer"; var progressCancel = document.createElement("a"); progressCancel.className = "progressCancel"; progressCancel.href = "#"; progressCancel.style.visibility = "hidden"; progressCancel.appendChild(document.createTextNode(" ")); var progressText = document.createElement("div"); progressText.className = "progressName"; progressText.appendChild(document.createTextNode(file.name)); var progressBar = document.createElement("div"); progressBar.className = "progressBarInProgress"; var progressStatus = document.createElement("div"); progressStatus.className = "progressBarStatus"; progressStatus.innerHTML = "&nbsp;"; this.fileProgressElement.appendChild(progressCancel); this.fileProgressElement.appendChild(progressText); this.fileProgressElement.appendChild(progressStatus); this.fileProgressElement.appendChild(progressBar); this.fileProgressWrapper.appendChild(this.fileProgressElement); document.getElementById(targetID).appendChild(this.fileProgressWrapper); fadeIn(this.fileProgressWrapper, 0); } else { this.fileProgressElement = this.fileProgressWrapper.firstChild; this.fileProgressElement.childNodes[1].firstChild.nodeValue = file.name; } this.height = this.fileProgressWrapper.offsetHeight; } FileProgress.prototype.setProgress = function (percentage) { this.fileProgressElement.className = "progressContainer green"; this.fileProgressElement.childNodes[3].className = "progressBarInProgress"; this.fileProgressElement.childNodes[3].style.width = percentage + "%"; }; FileProgress.prototype.setComplete = function () { this.fileProgressElement.className = "progressContainer blue"; this.fileProgressElement.childNodes[3].className = "progressBarComplete"; this.fileProgressElement.childNodes[3].style.width = ""; }; FileProgress.prototype.setError = function () { 
this.fileProgressElement.className = "progressContainer red"; this.fileProgressElement.childNodes[3].className = "progressBarError"; this.fileProgressElement.childNodes[3].style.width = ""; }; FileProgress.prototype.setCancelled = function () { this.fileProgressElement.className = "progressContainer"; this.fileProgressElement.childNodes[3].className = "progressBarError"; this.fileProgressElement.childNodes[3].style.width = ""; }; FileProgress.prototype.setStatus = function (status) { this.fileProgressElement.childNodes[2].innerHTML = status; }; FileProgress.prototype.toggleCancel = function (show, swfuploadInstance) { this.fileProgressElement.childNodes[0].style.visibility = show ? "visible" : "hidden"; if (swfuploadInstance) { var fileID = this.fileProgressID; this.fileProgressElement.childNodes[0].onclick = function () { swfuploadInstance.cancelUpload(fileID); return false; }; } }; i am using a swfuploader an i jst added a input fields and a textarea when it preview the images which ready to be uploaded and from my html i have this script var swfu; window.onload = function () { swfu = new SWFUpload({ // Backend Settings upload_url: "../we_modules/upload.php", // Relative to the SWF file or absolute post_params: {"PHPSESSID": ""}, // File Upload Settings file_size_limit : "20 MB", // 2MB file_types : "*.*", //file_types : "", file_types_description : "jpg", file_upload_limit : "0", file_queue_limit : "0", // Event Handler Settings - these functions as defined in Handlers.js // The handlers are not part of SWFUpload but are part of my website and control how // my website reacts to the SWFUpload events. //file_queued_handler : fileQueued, file_queue_error_handler : fileQueueError, file_dialog_complete_handler : fileDialogComplete, upload_progress_handler : uploadProgress, upload_error_handler : uploadError, upload_success_handler : uploadSuccess, upload_complete_handler : uploadComplete, // Button Settings button_image_url : "../we_modules/images/SmallSpyGlassWithTransperancy_17x18.png", // Relative to the SWF file button_placeholder_id : "spanButtonPlaceholder", button_width: 180, button_height: 18, button_text : 'Select Files(2 MB Max)', button_text_style : '.button { font-family: Helvetica, Arial, sans-serif; font-size: 12pt;cursor:pointer } .buttonSmall { font-size: 10pt; }', button_text_top_padding: 0, button_text_left_padding: 18, button_window_mode: SWFUpload.WINDOW_MODE.TRANSPARENT, button_cursor: SWFUpload.CURSOR.HAND, // Flash Settings flash_url : "../swfupload/swfupload.swf", custom_settings : { upload_target : "divFileProgressContainer" }, // Debug Settings debug: false }); }; where should i put on the tinymce function as you mention below?

    Read the article
