Search Results

Search found 45 results on 2 pages for 'lynn adrianna'.

Page 1/2 | 1 2  | Next Page >

  • Windows 7 and Vista Home Networked-Unable to login from Vista to Windows 7

    - by Lynn
    I set up both Vista and Windows 7 on the same workgroup. I can view Windows 7 from Vista and vice versa. I can log in to Vista from Windows 7, but I am unable to log in to Windows 7 from Vista. When I enter the Windows 7 user name and password on Vista, the following message appears: "Logon unsuccessful: Windows is unable to log you on. Be sure that your user name and password are correct." Both are correct. Do you have any idea how I can resolve this logon issue? Thank you, Lynn

    Read the article

  • My C# and DLL Data Woes

    - by Lynn
    Hey guys, I'm a very beginner C# coder, so if I get some of the terms incorrect, please be easy on me. I'm trying to see if it is possible to pull data from a DLL. I did some research and found that you can store application resources within a DLL. What I couldn't find was the information to tell me how to do that. There is an MS article that explains how to access resources within a satellite DLL, but I honestly don't know if that is what I'm looking for: http://msdn.microsoft.com/en-us/library/ms165653.aspx I did try some of the code involved, but there are some "FileNotFoundExceptions" going on. The rest of the DLL information is showing up: classes, objects, etc. I just added the DLL as a resource in my Visual Studio project and added it with "using". I just don't know how to get at the meat of it, if it is possible. Thanks, Lynn
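    A minimal sketch of reading an embedded resource out of a referenced DLL with System.Reflection; the DLL path and the resource name "MyLibrary.Resources.Data.txt" are made-up placeholders, so substitute whatever GetManifestResourceNames() actually reports for your assembly:

        using System;
        using System.IO;
        using System.Reflection;

        class ResourceDemo
        {
            static void Main()
            {
                // Load the library; a FileNotFoundException here usually means the
                // path (or one of the DLL's own dependencies) cannot be located.
                Assembly lib = Assembly.LoadFrom("MyLibrary.dll");

                // List every embedded resource so you can see the exact names to use.
                foreach (string name in lib.GetManifestResourceNames())
                    Console.WriteLine(name);

                // Read one embedded text resource by its full, namespace-qualified name.
                using (Stream s = lib.GetManifestResourceStream("MyLibrary.Resources.Data.txt"))
                using (StreamReader reader = new StreamReader(s))
                    Console.WriteLine(reader.ReadToEnd());
            }
        }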

    Read the article

  • Virtualization in Solaris 11 Express

    - by lynn.rohrer(at)oracle.com
    In Oracle Solaris 10 we introduced Oracle Solaris Containers -- lightweight virtual application environments that allow you to consolidate your Oracle Solaris applications onto a single Oracle Solaris server and make the most of your system resources. The majority of our customers are now using Oracle Solaris Containers on their enterprise systems for applications ranging from web servers to Oracle Database installations. We can also make these Containers highly available with Oracle Solaris Cluster, the industry's first virtualization-aware enterprise cluster product. Using Oracle Solaris Cluster you can fail over applications in a Container to another Container on a single system or across systems for additional availability.

    We've added significant features in Oracle Solaris 11 Express to improve and extend the Oracle Solaris Zone model:

    - Integration of Zones with our new Solaris 11 packaging system (aka Image Packaging System) to provide easy software updates within a zone
    - Support for Oracle Solaris 10 Zones to run your Solaris 10 applications unaltered on an Oracle Solaris 11 Express system
    - Integration with the new Oracle Solaris 11 network stack architecture (more on this in a future blog post)
    - Improved observability with the zonestat management interface and commands
    - Delegated administration rights for owners of individual non-global zones
    - Tight integration with Oracle Solaris ZFS to allow dedicated datasets per zone
    - With ZFS as the default file system we can now provide easy-to-manage Boot Environments for zones

    This quick summary is just to whet your appetite to learn more about Oracle Solaris 11 Express Zones enhancements. Fortunately we can serve a full meal at the Oracle Solaris 11 Express Technology Spotlight on Virtualization page on the Oracle Technology Network.

    Read the article

  • Google bots are severely affecting site performance

    - by Lynn
    I have an aggregate site on a Linux server that pulls in feeds from a universe of about 2,000 blogs. It's in WordPress 3.4.2, and I have a cron job, staggered to run five times an hour on another server, that pulls in the stories and then publishes them to the front page of this site. This is so I didn't put too much pressure all on one server. However, the Google bots, which visit a few times every hour, bring the server to its knees in the mornings and evenings when there is an increase in traffic on the site. The bots have something like 30,000 links to follow at this point. How do I throttle the bots to simply grab the new stories off the front page and stop there?

    EDIT - Details of my server configuration: The way we have this set up is that the server that handles all the publishing is an unmanaged instance on AWS. It mounts the NFS server and connects to the RDS to update content, etc. You get to this publishing instance via a plugin that detects the wp-admin link and then redirects you there. The front-end app server also mounts the NFS and requests data from the RDS. It is the only one that has WP Super Cache on it. The OS is Ubuntu on the app server and the NFS runs CentOS. The front end is Nginx and the publishing server is Apache.
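    A sketch of the robots.txt direction (the Disallow paths are placeholders for whatever permalink structure the site actually uses, and note that Googlebot ignores Crawl-delay -- its crawl rate is set in Google Webmaster Tools instead):

        User-agent: *
        Crawl-delay: 10     # respected by Bing/Yandex; Googlebot ignores this line
        Disallow: /20       # hypothetical date-based story permalinks such as /2012/...
        Disallow: /feed/
        Disallow: /tag/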

    Read the article

  • Searching for Ubuntu video I saw that covered Ubuntu One

    - by Elijah Lynn
    I have been searching for an Ubuntu video I saw 1-2 months ago where the guy was demonstrating Ubuntu One and how he used it to sync local profile stuff across different machines. Has anyone seen this? I forgot exactly what it was about, but I know he spent a minute or two on it, if not the whole video. I searched all my web history as well as my YouTube history, but no go. Not sure if we are allowed to ask these questions, but here goes.

    Read the article

  • Ubuntu 12.04 "Do nothing" when "lid closed" blanks out external monitor

    - by Elijah Lynn
    This is not the same as this question. I have Ubuntu 12.04 running with an Nvidia card on a W510 ThinkPad. I have one external monitor connected. When I change the power settings to "Do nothing" when "Lid closed", it still keeps the system running, which is great. However, it blanks out the display on any external monitors, making the system useless. I plan on getting a dock soon and having two identical-resolution monitors, and would love to be able to dock the laptop and work as normal on the external monitors. Does anyone have a suggestion or fix for this? Should I report this as a bug or feature request?

    Read the article

  • High end mobile workstations with pointer stick

    - by Elijah Lynn
    I am looking for a list of higher end mobile workstations that run Ubuntu/Kubuntu well and also have a hardware pointer stick. Here's an illustration of one (from sciencesurvivalblog): I wouldn't mind getting a Macbook Pro and wiping it but they refuse to use pointer sticks and to me, they are extremely efficient. I see a lot of potential for Lenovo thinkpads as well. System 76 said they have no plans to implement a hardware pointer stick so that leaves them out as well. Any ideas?

    Read the article

  • Problem with JavaScript arithmetic

    - by Lynn
    I have a form for my customers to add budget projections. A prominent user wants to be able to show dollar values in either dollars, kilo-dollars or mega-dollars. I'm trying to achieve this with a group of radio buttons that call the following JavaScript function, but am having problems with rounding that make the results look pretty crummy. Any advice would be much appreciated! Lynn

        function setDollars(new_mode) {
            var factor;
            var myfield;
            var myval;
            var cur_mode = document.proj_form.cur_dollars.value;
            if (cur_mode == new_mode) {
                return;
            } else if ((cur_mode == 'd') && (new_mode == 'kd')) {
                factor = "0.001";
            } else if ((cur_mode == 'd') && (new_mode == 'md')) {
                factor = "0.000001";
            } else if ((cur_mode == 'kd') && (new_mode == 'd')) {
                factor = "1000";
            } else if ((cur_mode == 'kd') && (new_mode == 'md')) {
                factor = "0.001";
            } else if ((cur_mode == 'md') && (new_mode == 'kd')) {
                factor = "1000";
            } else if ((cur_mode == 'md') && (new_mode == 'd')) {
                factor = "1000000";
            }
            document.proj_form.cur_dollars.value = new_mode;
            var cur_idx = document.proj_form.cur_idx.value;
            var available_slots = 13 - cur_idx;
            var td_name;
            var cell;
            var new_value;
            // Adjust dollar values for projections
            for (i = 1; i < 13; i++) {
                var myfield = eval('document.proj_form.proj_' + i);
                if (myfield.value == '') {
                    myfield.value = 0;
                }
                var myval = parseFloat(myfield.value) * parseFloat(factor);
                myfield.value = myval;
                if (i < cur_idx) {
                    document.getElementById("actual_" + i).innerHTML = myval;
                }
            }
        }
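    A sketch of one way to keep the conversions from producing long floating-point tails: use numeric factors, convert through plain dollars, and round the result for display (the 6-decimal precision is an arbitrary assumption):

        // Dollars per unit for each mode.
        var factors = { d: 1, kd: 1000, md: 1000000 };

        function convert(value, cur_mode, new_mode) {
            // Convert to plain dollars first, then into the target unit.
            var dollars = parseFloat(value) * factors[cur_mode];
            var converted = dollars / factors[new_mode];
            // toFixed() trims binary floating-point noise such as 0.30000000000000004,
            // and parseFloat() drops the trailing zeros again ("1.500000" -> 1.5).
            return parseFloat(converted.toFixed(6));
        }

        // Example: convert(1234, 'd', 'kd') returns 1.234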

    Read the article

  • Is it possible to split HTML using DOMDocument?

    - by Lynn Adrianna
    Using DOMDocument, is it possible to split a block of HTML by text wrapped in tags and text that is not, while maintaining the order? Sorry if this doesn't make sense; my example should make it clear. Let's say I have the following block of HTML:

        text1<b style="color:pink">text2</b>text3<b>text4</b> <b style="font-weight:bold">text5</b>

    Is it possible to create an array as such:

        array(
            [0] => text1
            [1] => <b style="color:pink">text2</b>
            [2] => text3
            [3] => <b>text4</b>
            [4] =>
            [5] => <b style="font-weight:bold">text5</b>
        )

    Below is my current working solution, which uses a regular expression to split the HTML:

        $tokens = preg_split('/(<b\b[^>]*>.*?<\/b>)/i', $html, null, PREG_SPLIT_DELIM_CAPTURE);

    However, I have always read that it is a bad idea to parse HTML using regular expressions, so I was just wondering if there is a better way.
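    For what it's worth, a sketch of how the same split can be done with DOMDocument by walking the top-level child nodes in order (the wrapper <div> is only there to keep the fragment together, and saveHTML() with a node argument needs PHP 5.3.6 or later):

        <?php
        $html = 'text1<b style="color:pink">text2</b>text3<b>text4</b> <b style="font-weight:bold">text5</b>';

        $doc = new DOMDocument();
        $doc->loadHTML('<div>' . $html . '</div>');

        $parts = array();
        $div = $doc->getElementsByTagName('div')->item(0);
        foreach ($div->childNodes as $node) {
            if ($node->nodeType === XML_TEXT_NODE) {
                $parts[] = $node->nodeValue;        // bare text between the tags
            } else {
                $parts[] = $doc->saveHTML($node);   // the element with its markup
            }
        }

        print_r($parts);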

    Read the article

  • OpenLDAP mirror mode replication failing with TLS behind a load balancer

    - by Lynn Owens
    I have two OpenLDAP servers that are both running TLS. They are:

        ldap1.mydomain.com
        ldap2.mydomain.com

    I also have a load balancer cluster with a DNS name of its own: ldap.mydomain.com. The SSL certificate has a CN of ldap.mydomain.com, with SANs of ldap1.mydomain.com and ldap2.mydomain.com. Everything works... except mirror mode replication. My mirror mode replication is set up like this:

    ldap.conf:

        TLS_REQCERT allow

    cn=config.ldif:

        olcServerID: 1 ldap://ldap1.mydomain.com
        olcServerID: 2 ldap://ldap2.mydomain.com

    On ldap1, olcDatabase{1}hdb.ldif:

        olcMirrorMode: TRUE
        olcSyncrepl: {0}rid=001 provider=ldap://ldap2.mydomain.com bindmethod=simple
          binddn="cn=me,dc=mydomain,dc=com" credentials="REDACTED" starttls=yes
          searchbase="dc=mydomain,dc=com" schemachecking=on type=refreshAndPersist
          retry="60 +"

    On ldap2, olcDatabase{1}hdb.ldif:

        olcMirrorMode: TRUE
        olcSyncrepl: {0}rid=001 provider=ldap://ldap1.mydomain.com bindmethod=simple
          binddn="cn=me,dc=mydomain,dc=com" credentials="REDACTED" starttls=yes
          searchbase="dc=mydomain,dc=com" schemachecking=on type=refreshAndPersist
          retry="60 +"

    Here are the errors I'm getting in syslog:

        Dec 1 21:05:01 ldap1 slapd[6800]: slap_client_connect: URI=ldap://ldap2.mydomain.com DN="cn=me,dc=mydomain,dc=com" ldap_sasl_bind_s failed (-1)
        Dec 1 21:05:01 ldap1 slapd[6800]: do_syncrepl: rid=001 rc -1 retrying
        Dec 1 21:05:08 ldap1 slapd[6800]: conn=1111 fd=20 ACCEPT from IP=ldap.mydomain.com:2295 (IP=ldap1.mydomain.com:636)
        Dec 1 21:05:08 ldap1 slapd[6800]: conn=1111 fd=20 closed (TLS negotiation failure)

    Any ideas? I've been working on OpenLDAP for way too long now.
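    One way to isolate whether the failure is the TLS handshake or the syncrepl bind is to reproduce the connection by hand from ldap1 (a debugging sketch; the DN and search base are the ones from the config above):

        # Force StartTLS (-ZZ aborts loudly if the handshake does not complete),
        # binding with the same replication DN the syncrepl stanza uses.
        ldapsearch -H ldap://ldap2.mydomain.com -ZZ \
            -D "cn=me,dc=mydomain,dc=com" -W \
            -b "dc=mydomain,dc=com" -s base

        # Check which names the certificate on the LDAPS port actually presents.
        openssl s_client -connect ldap2.mydomain.com:636 -showcerts </dev/null | head -40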

    Read the article

  • Win7 playback of dvr-ms files stutters

    - by Jim Lynn
    I've just had to install Windows 7 on my Media Center machine because my Vista installation had a faulty drive. I've got the latest drivers that I can find - Intel 945GM integrated Graphics, Realtek audio drivers. Things are working OK with one exception. Playback of old recordings, from dvr-ms format files, is choppy. The picture freezes for a fraction of a second, then quickly catches up. The sound is uninterrupted and doesn't pause. These freezes happen once every 5 seconds or so. It's very regular. Playback of Live TV from the digital tuner is perfectly smooth. DVD playback is perfectly smooth.

    As an experiment, I used the MPEG editing package VideoReDo to create a small test file in three different formats. This program takes the raw MPEG streams and repackages them into the desired container. I took the same clip and created three files in three formats: dvr-ms (Microsoft's old recorded TV format); mpg (standard MPEG); and ts (raw MPEG transport stream of the kind often produced by PVRs). When these three files are played back under Windows 7, the mpg and ts files play smoothly, but the dvr-ms file stutters.

    The last piece of data I have is that two other Windows 7 machines can play back dvr-ms files smoothly with no stuttering. One is a netbook, with less grunt than the media centre. So there must be something specific about my Media Center machine that's causing the problem. Does anyone have any idea where I can look now? I don't know much about AV software, codecs, filter graphs etc. but I suspect that's where the problem lies. Rendering the video isn't the problem, but extracting the streams is. How would I go about diagnosing the problem?

    Edited to add: I just used the GraphStudio tool to look at the filter graph on the offending PC. The filter graph it uses by default for dvr-ms looks identical to the other machines, and, interestingly, when I play the files using GraphStudio they run smoothly. Under Windows Media Player and Windows Media Center they stutter. I'd like to see the filter graph for WMP but GraphStudio won't show it. It looks like WMP and WMC are using a different decoding path to GraphStudio.

    Edited again to add: Today I purchased a new HDTV. The same Media Center driving the TV at 1080p is now playing back the old Recorded TV files smoothly, without stuttering. So whatever the cause of the original problem, using a different resolution seems to have removed the problem. It might also explain why nobody else has had this problem. I doubt many people use Media Centre with a 14in portable TV.

    Read the article

  • ldap_modify: Insufficient access (50)

    - by Lynn Owens
    I am running an OpenLDAP 2.4 server that uses the SSL service for communication. It works for lookups. I am trying to add mirror mode replication. So this is the command that I'm executing:

        ldapmodify -D "cn=myuser,dc=mydomain,dc=com" -H ldaps://myloadbalancer -W -f /etc/ldap/ldif/server_id.ldif

    where this is my server_id.ldif:

        dn: cn=config
        changetype: modify
        replace: olcServerID
        olcServerID: 1 myserver1
        olcServerID: 2 myserver2

    and this is my cn\=config.ldif in the slapd.d tree of text files:

        dn: cn=config
        objectClass: olcGlobal
        cn: config
        olcArgsFile: /var/run/slapd/slapd.args
        olcPidFile: /var/run/slapd/slapd.pid
        olcToolThreads: 1
        structuralObjectClass: olcGlobal
        entryUUID: ff9689de-c61d-1031-880b-c3eb45d66183
        creatorsName: cn=config
        createTimestamp: 20121118224947Z
        olcLogLevel: stats
        olcTLSCertificateFile: /etc/ldap/certs/ldapscert.pem
        olcTLSCertificateKeyFile: /etc/ldap/certs/ldapskey.pem
        olcTLSCACertificateFile: /etc/ldap/certs/ldapscert.pem
        olcTLSVerifyClient: never
        entryCSN: 20121119022009.770692Z#000000#000#000000
        modifiersName: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
        modifyTimestamp: 20121119022009Z

    But unfortunately I'm getting this:

        Enter LDAP Password:
        modifying entry "cn=config"
        ldap_modify: Insufficient access (50)

    If I try to specify the config database I get this:

        ldapmodify -H 'ldaps://myloadbalancer/cn=config' -D "cn=myuser,cn=config" -W -f ./server_id.ldif
        Enter LDAP Password:
        ldap_bind: Invalid credentials (49)

    Does anyone know how I can add the serverID to the config database so that I can complete the setup of mirror mode?
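    The modifiersName shown in cn=config above is the local SASL EXTERNAL identity, which suggests the config database is writable as root over the ldapi socket rather than via a simple bind. A sketch of that approach, run as root on the server itself (the ldapi:/// socket path is the Debian/Ubuntu default and is an assumption here):

        # Apply the change through the local socket as the EXTERNAL (root) identity;
        # no password is involved, so the cn=config ACLs are the only gate.
        sudo ldapmodify -Y EXTERNAL -H ldapi:/// -f /etc/ldap/ldif/server_id.ldif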

    Read the article

  • Trouble with local id / remote id configuration of VPN

    - by Lynn Owens
    I have a NetGear UTM firewall and a Windows machine running NetGear's VPN client. The Windows machine I can put on the UTM network and take off of it. When I am cabled into the local (internal) network, the following configuration works:

    UTM:
        Local Id: Local Wan IP: (The UTM's WAN IP address)
        Remote Id: User FQDN: utm_remote1.com

    Client:
        Local Id: DNS: utm_remote1.com
        Remote Id: (The UTM's WAN IP address)
        Gateway authentication: preshared key
        Policy remote endpoint: FQDN: utm_remote1.com

    But when I'm off the UTM's internal local network and simply coming in from the internet, this does not work. It simply repeats SEND phase 1 before giving up. Since I know that the UTM WAN IP is accessible from both inside and outside the network, I figured the problem was with the client local ID. So, I tried the following:

    UTM:
        Local Id: Local Wan IP: (The UTM's WAN IP address)
        Remote Id: (A DN of a self-signed certificate I created for the client and uploaded into the UTM certificates)

    Client:
        Local Id: (The DN of the aforementioned self-signed cert)
        Remote Id: (The UTM's WAN IP address)
        Gateway authentication: (the aforementioned self-signed cert)
        Policy remote end point: ... er, ... my choices are IP and FQDN.... Not sure what to put here

    No matter what I've tried, it just keeps repeating the SEND phase 1. Any ideas?

    Read the article

  • Windows 7 Premium Error

    - by Lynn
    I am getting the following error details:

        Problem signature:
          Problem Event Name: BlueScreen
          OS Version: 6.1.7601.2.1.0.768.3
          Locale ID: 1033

        Additional information about the problem:
          BCCode: 9f
          BCP1: 0000000000000003
          BCP2: FFFFFA8004281A10
          BCP3: FFFFF80000B9C518
          BCP4: FFFFFA8007BD5C10
          OS Version: 6_1_7601
          Service Pack: 1_0
          Product: 768_1

        Files that help describe the problem:
          C:\Windows\Minidump\121212-74334-01.dmp
          C:\Users\brianna\AppData\Local\Temp\WER-251161-0.sysdata.xml

    Anyone have any suggestions? Thanks!

    Read the article

  • VPN: What should my Gateway remote ID be?

    - by Lynn Owens
    I have a NetGear ProSafe UTM. I set the Gateway local ID to its WAN IP, but I'm not sure what to put for its Remote ID. I want to be able to connect to it from a laptop across the internet. I can choose between: Remote IP FQDN Client FQDN Cert DN Frankly I've tried messing around with them all, but I'm just shooting in the dark, and the help desk docs are worthless. Also, Googling around seems to end up with lots of pages not really related to what I want: a lot of pages on configuring Cisco or Windows home networking, or privacy advocates.

    Read the article

  • Can't mount home after trying to resize (bad geometry: block count exceeds size of device).

    - by Lynn
    This is on a fresh computer (a supercomputer, actually). It got to me with 15T on the home mount and 50G on the root. I tried allocating 7T to root and resizing (since I'm putting a local yum repo on this machine, as it has no internet access nor will it ever). I tried following the instructions here: Centos 6.3 disk space allocation, but something went wrong and home won't mount again. Instead I get this from dmesg | tail:

        EXT4-fs (dm-2): bad geometry: block count 4294967295 exceeds size of device (1342177280 blocks)

    df -h nets this output:

        Filesystem                    Size  Used Avail Use% Mounted on
        /dev/mapper/VolGroup-lv_root  7.0T  3.6G  6.6T   1% /
        tmpfs                         190G  216K  190G   1% /dev/shm
        /dev/sda1                     485M   38M  422M   9% /boot

    I didn't have any files on /dev/mapper/VolGroup-lv_home. Will simply running mke2fs fix it to be mountable? What sort of options should I run it with? I've never resized volumes before or used mke2fs. I don't want to make this mess worse.
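    Since lv_home held no files, one recovery path is simply to recreate the filesystem at the logical volume's current size rather than trying to repair the old superblock. A sketch, assuming the home LV is still /dev/mapper/VolGroup-lv_home and the fstab entry is unchanged:

        # Make a fresh ext4 filesystem sized to the (resized) logical volume.
        mkfs.ext4 /dev/mapper/VolGroup-lv_home

        # Mount it again via the existing fstab entry.
        mount /home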

    Read the article

  • Django ImageField issue with JPEG's

    - by Kieran Lynn
    I am having a major issue with PIL (Python Imaging Library) in Django and have jumped through a lot of hoops, and have thus far not been able to figure out what the root of the issue is. The problem essentially breaks down to not being able to upload JPEG images through the ImageField in the Django admin. But the issue is not as simple as installing libjpeg.

    First, I installed PIL (through Buildout) and realized once it was installed that I had not installed libjpeg, because JPEG support was not available. Having not set up the server myself, I just assumed that it was not installed, and I compiled libjpeg 8 from source. This ended up in my /usr/local/lib/ directory. I cleared out my Buildout files and rebuilt everything. This time when PIL compiled I had JPEG support. But I went to the Django admin and tried to upload a JPEG through an ImageField with no luck. I got the "Upload a valid image. The file you uploaded was either not an image or a corrupted image" error.

    Just as a test I opened up the Django shell and ran the following:

        > import Image
        > i = Image.open( "/absolute_path/file.jpg" )
        > print i
        <JpegImagePlugin.JpegImageFile image mode=RGB size=940x375 at 0x7F908C529BD8>

    This runs with no errors and shows that PIL is able to open JPEGs. After doing some reading, I came across this thread: Is it possible to control which libraries apache uses? Looks like PHP also uses libjpeg and is loading before Django, and therefore loading libjpeg 6.2 first. This is shown when using lsof:

        COMMAND  PID     USER  FD TYPE DEVICE SIZE/OFF   NODE NAME
        apache2 2561 www-data mem  REG  202,1   146032 639276 /usr/lib/libjpeg.so.62.0.0

    So my thought is that I should be using libjpeg 6.2. So I removed the libjpeg located in my /usr/local/lib directory. After rereading the PIL installation instructions, I realized that I might not have the dev/header files for libjpeg that PIL needs. So I also uninstalled libjpeg using the aptitude uninstaller (sudo aptitude remove libjpeg62). Then, to ensure that I got the header files PIL needed, I installed libjpeg with apt-get (sudo apt-get install libjpeg62-dev). From here I cleaned out my Buildout directory and reran Buildout, which in turn reinstalled PIL. Once again I have JPEG support, now using libjpeg62.

    So I go to test in the Django admin. Still no JPEG support. So I wanted to test JPEG support in general and see, if the exception was not handled, what kind of error it would throw. So in my homepage view I added the following code to open a JPEG image:

        import Image
        i = Image.open( "/absolute_path/file.jpg" )
        v = i.verify()

    Then I pass i to the HTML view just to easily see the output. I deploy these changes to the server and restart. I am surprised not to see an error and get the following output:

        {{ i }} - <JpegImagePlugin.JpegImageFile image mode=RGB size=940x375 at 0x7F908C529BD8>
        {{ v }} - None

    So at this point I am really confused:

    - Why can I successfully open a JPEG while the admin cannot?
    - Am I missing something? Is this not an issue with libjpeg?
    - If it is not an issue with libjpeg, why can I upload a PNG with no issues?

    Any help would be much appreciated; I have been on this for 2 days debugging with no luck.

    Setup:
    1. Rackspace Cloud Server
    2. Ubuntu 10.04
    3. Django 1.2.3 (installed through Buildout)
    4. PIL 1.1.7 (installed through Buildout)
    5. libjpeg 6.2 (installed through apt-get: sudo apt-get install libjpeg62-dev)
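    One detail that may explain the confusing test results: Image.open() only reads the header, and verify() does not decode pixel data, so neither of those tests exercises the JPEG decoder the way Django's ImageField validation does. A sketch that forces a full decode (the file path is a placeholder):

        import Image  # PIL 1.1.7-style import, matching the question

        i = Image.open("/absolute_path/file.jpg")   # lazy: only the header is read here
        try:
            # load() performs the actual decode; it raises "decoder jpeg not available"
            # if the _imaging module this interpreter uses was built without libjpeg.
            i.load()
            print "JPEG decoding works"
        except IOError, e:
            print "JPEG decoding failed:", e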

    Read the article

  • c#: exporting swf object as image to Word

    - by Lynn
    Hello, in my ASP.NET web page (C# on the back end) I use a Repeater, whose items consist of a title and a Flex chart (embedded .swf file). I am trying to export the contents of the Repeater to a Word document. My problem is to convert the SWF files into images and pass them on to the Word document. The swf object has a public function which returns a ByteArray representation of itself (public function grabScreen():ByteArray), but I do not know how to call it directly from C#. I have access to the mxml files, so I can make modifications to the swf files, if needed. The code is shown below, and your help is appreciated :)

    .aspx:

        <asp:Button ID="Button1" runat="server" text="export to Word" onclick="print2"/>
        <asp:Repeater ID="rptrQuestions" runat="server" OnItemDataBound="rptrQuestions_ItemDataBound" >
        ...
        <ItemTemplate>
          <tr>
            <td>
              <div align="center">
                <asp:Label class="text" Text='<%#DataBinder.Eval(Container.DataItem, "Question_title")%>' runat="server" ID="lbl_title" NAME="lbl_title"/>
                <br>
              </div>
            </td>
          </tr>
          <tr><td>
            <object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" id="result_survey" width="100%" height="100%" codebase="http://fpdownload.macromedia.com/get/flashplayer/current/swflash.cab">
              <param name="movie" value="result_survey.swf" />
              <param name="quality" value="high" />
              <param name="bgcolor" value="#ffffff" />
              <param name="allowScriptAccess" value="sameDomain" />
              <param name="flashvars" value='<%#DataBinder.Eval(Container.DataItem, "rank_order")%>' />
              <embed src="result_survey.swf?rankOrder='<%#DataBinder.Eval(Container.DataItem, "rank_order")%>' quality="high" bgcolor="#ffffff" width="100%" height="100%" name="result_survey" align="middle" play="true" loop="false" allowscriptaccess="sameDomain" type="application/x-shockwave-flash" pluginspage="http://www.adobe.com/go/getflashplayer">
              </embed>
            </object>
          </td></tr>
        </ItemTemplate>
        </asp:Repeater>

    C#:

        protected void print2(object sender, EventArgs e)
        {
            HttpContext.Current.Response.Clear();
            HttpContext.Current.Response.Charset = "";
            HttpContext.Current.Response.ContentEncoding = System.Text.Encoding.UTF7;
            HttpContext.Current.Response.Cache.SetCacheability(HttpCacheability.NoCache);
            HttpContext.Current.Response.ContentType = "application/msword";
            HttpContext.Current.Response.AddHeader("content-disposition", "attachment;filename=" + "Report.doc");
            EnableViewState = false;

            System.IO.StringWriter sw = new System.IO.StringWriter();
            HtmlTextWriter htw = new HtmlTextWriter(sw);

            // Here I render the Repeater
            foreach (RepeaterItem row in rptrQuestions.Items)
            {
                row.RenderControl(htw);
            }

            StringBuilder sb1 = new StringBuilder();
            sb1 = sb1.Append("<table>" + sw.ToString() + "</table>");
            HttpContext.Current.Response.Write(sb1.ToString());
            HttpContext.Current.Response.Flush();
            HttpContext.Current.Response.End();
        }

    .mxml:

        //##################################################
        // grabScreen (return image representation of the SWF movie (snapshot)
        //######################################################
        public function grabScreen() : ByteArray
        {
            return ImageSnapshot.captureImage( boxMain, 0, new PNGEncoder() ).data();
        }
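    Since server-side C# cannot call into the running SWF directly, the usual bridge is to expose grabScreen() to the hosting page's JavaScript via ExternalInterface and post the bytes back from there. A sketch of the ActionScript side only (the callback name and the base64 approach are assumptions, not part of the original code):

        // In the mxml Script block: register a JavaScript-callable wrapper,
        // e.g. from the application's creationComplete handler.
        import flash.external.ExternalInterface;
        import mx.utils.Base64Encoder;

        private function init():void {
            if (ExternalInterface.available) {
                ExternalInterface.addCallback("grabScreenAsBase64", grabScreenAsBase64);
            }
        }

        private function grabScreenAsBase64():String {
            var enc:Base64Encoder = new Base64Encoder();
            enc.encodeBytes(grabScreen());   // reuse the existing grabScreen()
            return enc.toString();           // JavaScript can POST this string to the server
        }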

    Read the article

  • Disabling model's after_find only when called from certain controllers

    - by Lynn C
    I have an after_find callback in a model, but I need to disable it in a particular controller action, e.g.

        def index
          @people = People.find(:all)
          # do something here to disable after_find()?
        end

        def show
          @people = People.find(:all)
          # after_find() should still be called here!
        end

    What is the best way to do it? Can I pass something in to .find to disable all/particular callbacks? Can I somehow get the controller name in the model and not execute the callback based on the controller name (I don't like this)..? Help!
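    One common workaround is to guard the callback with a class-level switch that the controller flips for the one action (a sketch; the skip_after_find attribute is made up, and thread safety is ignored here):

        class Person < ActiveRecord::Base
          cattr_accessor :skip_after_find

          def after_find
            return if Person.skip_after_find
            # ... the expensive work that index doesn't need ...
          end
        end

        # In the controller:
        def index
          Person.skip_after_find = true
          @people = Person.find(:all)
        ensure
          Person.skip_after_find = false
        end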

    Read the article

  • ajax request rely on the previous one

    - by Lynn
    I want to do something like this:

        $.ajax({
            url: SOMEWHERE,
            ...
            success: function(data){
                // do sth...
                var new_url = data.url;
                $.ajax({
                    url: new_url,
                    success: function(data){
                        var another_url = data.url;
                        // ajax call rely on the result of previous one
                        $.ajax({
                            // do sth
                        })
                    }
                })
            },
            fail: function(){
                // do sth...
                // ajax call too
                $.ajax({
                    // config
                })
            }
        })

    The code looks like shit. I wonder how to make it look pretty. Some best practice?
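    A sketch of the usual way to flatten this: $.ajax() returns a promise, so the requests can be chained with .then() instead of nested (jQuery 1.8+ assumed; the URLs and handlers are placeholders):

        $.ajax({ url: firstUrl })
            .then(function(data) {
                // Returning another $.ajax() keeps the chain flat instead of nested.
                return $.ajax({ url: data.url });
            })
            .then(function(data) {
                return $.ajax({ url: data.url });
            })
            .done(function(data) {
                // final result of the last request
            })
            .fail(function() {
                // runs if any request in the chain fails
                $.ajax({ /* error-reporting request */ });
            });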

    Read the article

  • GET params in ruby-on-rails project - best practices?

    - by Lynn C
    I've inherited a little Rails app and I need to extend it slightly. It's actually quite simple, but I want to make sure I'm doing it the right way... If I visit myapp:3000/api/persons it gives me a full list of people in XML format. I want to pass a param in the URL so that I can return users that match the login or the email, e.g. myapp:3000/api/persons?login=jsmith would give me the person with the corresponding login. Here's the code:

        def index
          if params.size > 2 # We have 'action' & 'controller' by default
            if params['login']
              @person = [Person.find(:first, :conditions => { :login => params['login'] })]
            elsif params['email']
              @persons = [Person.find(:first, :conditions => { :email => params['email'] })]
            end
          else
            @persons = Person.find(:all)
          end
        end

    Two questions... Is it safe? Does ActiveRecord protect me from SQL injection attacks (notice I'm trusting the params that are coming in)? Is this the best way to do it, or is there some automagical Rails feature I'm not familiar with?
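    For what it's worth, a sketch of a slightly tighter version of the same action (it returns every match rather than wrapping find(:first) in an array): hash conditions like these are quoted by ActiveRecord before they reach the SQL, which is what makes passing the params through safe.

        def index
          conditions = {}
          conditions[:login] = params[:login] if params[:login]
          conditions[:email] = params[:email] if params[:email]

          @persons = if conditions.empty?
            Person.find(:all)
          else
            # Hash conditions are escaped by ActiveRecord, so the raw request
            # parameters never end up in the SQL string directly.
            Person.find(:all, :conditions => conditions)
          end
        end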

    Read the article

  • Starting activity from packageinfo

    - by Nyan Lynn Htun
    Is there a way to run an intent from PackageInfo? I've been searching and I haven't found one. I tried this:

        Intent i = new Intent();
        i.setAction(Intent.ACTION_MAIN);
        i.addCategory(Intent.CATEGORY_LAUNCHER);
        i.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK | Intent.FLAG_ACTIVITY_RESET_TASK_IF_NEEDED);
        i.setComponent(new ComponentName(p.applicationInfo.packageName, p.applicationInfo.name));
        startActivity(i);

    but it doesn't work, because [p.applicationInfo.name] is always null.
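    A sketch of the usual shortcut: PackageManager can build the launch intent itself, which avoids guessing the activity class from applicationInfo (p is assumed to be the PackageInfo from the question):

        // Let PackageManager resolve the package's launcher activity.
        PackageManager pm = getPackageManager();
        Intent launch = pm.getLaunchIntentForPackage(p.applicationInfo.packageName);
        if (launch != null) {   // null means the package declares no launcher activity
            launch.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
            startActivity(launch);
        }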

    Read the article

1 2  | Next Page >