Search Results

Search found 81493 results on 3260 pages for 'file size'.


  • .php files blank - .php5 files work

    - by Kleidi
    I have a problem with a server of mine. I've installed Virtualmin/Webmin on it for administration and I have 1 domain on it. DNS management is external. On this domain I only have an html "Under Construction" index and 5 subdomains. In all those subdomains I have PHP systems running perfectly. I've tried to install Wordpress on the main domain and I'm having some issues: no .php files load. I made a phpinfo file on it to check and it won't work either; only a blank page appears, and when I check the page source in the browser, the PHP code appears. I changed the extensions to .php5 and it worked perfectly. Something is going wrong with it but I can't figure out what. I have checked the Apache error log and nothing appears. Three days ago I upgraded from PHP 5.2.* to 5.4.21. The server is running CentOS 5.10.
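
    A hedged diagnostic sketch, not taken from the post (paths assume a stock CentOS/Virtualmin Apache layout, and the handler name in the comment is only an example): if the PHP handler is mapped only to the .php5 extension, Apache serves .php files without running them, which would match a blank page whose source still shows the PHP code.

        # list every handler/type mapping that mentions php
        grep -RiE "addhandler|addtype|sethandler" /etc/httpd/conf /etc/httpd/conf.d | grep -i php
        # a mapping that names only .php5 would explain the behaviour, e.g.
        #   AddHandler application/x-httpd-php5 .php5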

    Read the article

  • redirecting output from telnet / nc to file in script fails when cron'd

    - by qhartman
    So, I have a device on my network which sits there listening on a port for a connection, and when a connection is made it dumps ASCII data out. I need to capture that data to a file. I wrote a dead simple shell script that does this:

        #!/bin/bash
        #Config Variables. Age is in Days.
        DATA_ROOT=/root/data
        FILENAME=data_`date +%F`.dat
        HOST=device
        COMPRESS_AGE=3

        #Sanity Checks
        if [ ! -e $DATA_ROOT ]
        then
            echo "The directory $DATA_ROOT seems to not exist. Please create it."
            exit 1
        fi
        if [ -e $DATA_ROOT/$FILENAME ]
        then
            echo "You seem to have extracted data already today. Aborting"
            exit 1
        fi

        #Get Data
        nc $HOST 2202 > $DATA_ROOT/$FILENAME

        #Compress old Data
        find $DATA_ROOT -type f -mtime +$COMPRESS_AGE -exec gzip {} \;
        exit 0

    It works great when I run it by hand, but when I run it from cron, it doesn't capture any of the output. If I replace nc with telnet I see the initial telnet headers about escape sequences and whatnot, but not the data. Ideas? I've tried forcing bash to act like an interactive shell with -i. I've tried redirecting both stderr and stdout. I know it's got to be some silly simple thing, but I'm utterly failing. This is driving me nuts... EDIT: I also just noticed that the nc processes from all my previous attempts at this have been sitting there sleeping, and when I killed them, cron sent me a bunch of nonsensical error messages. At least now I have something to dig into!
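
    A hedged debugging sketch rather than a known fix (cron gives the job a bare environment and no terminal, so pinning down stdin and capturing stderr is a cheap first step): give nc an explicit empty stdin and log its exit status and errors, which should show whether it is blocking, dying, or simply never receiving data when launched from cron.

        # explicit stdin, stderr captured alongside the data
        nc $HOST 2202 < /dev/null > $DATA_ROOT/$FILENAME 2> $DATA_ROOT/nc_errors.log
        echo "nc exited with $?" >> $DATA_ROOT/nc_errors.log
        # some netcat variants also accept -d ("do not read stdin"), which is
        # another way to take stdin out of the picture entirely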

    Read the article

  • Write stderr to a file using PowerShell

    - by Zian Choy
    How do I capture error messages from a PowerShell-launched command in a text file? I searched the Internet for a while and found that supposedly, I should be able to do something like cmd /c "big blob of text >C:\output.txt 2>c:\errors.txt" to direct the output to output.txt and the errors to errors.txt but when I try to run the command, I get the following error: cmd.exe : The filename, directory name, or volume label syntax is incorrect. At C:\Users\Zian\Desktop\Untitled1.ps1:27 char:4 + cmd <<<< /c $command + CategoryInfo : NotSpecified: (The filename, d...x is incorrect.:String) [], RemoteException + FullyQualifiedErrorId : NativeCommandError Furthermore, if I try to run the command without everything starting at "2", then the command executes correctly and output.txt catches the right output. I looked at Redirect stderr to variable in powershell but it wasn't helpful because the answer to that question suggests capturing the entire output and filtering it in memory. In my case, I am backing up every database on a computer and since the databases won't fit in my laptop's RAM, I cannot use the question's solution. I also found tantalizing suggestions about using $err = @(command goes here) but with no information on what to do other than simply inserting that line of text. I tried to utilize the search function on Serverfault with the string "@()", but it did not return any results. What can I do to get the error messages into errors.txt?

    Read the article

  • WSUS registry file: NoAutoRebootWithLoggedOnUsers entry being ignored

    - by the_pete
    We are using a registry entry to connect our internal workstations to our WSUS server and everything seems to be working except the NoAutoRebootWithLoggedOnUsers entry. Without fail, over the last few weeks, our lab setup as well as our users have been prompted to restart their machines with a 15-minute timeout, and there's nothing they can do about it. They can't postpone or cancel the restart; all options in the prompt are greyed out. Below is the registry file we are using to connect our workstations to our WSUS server:

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate]
        "AcceptTrustedPublisherCerts"=dword:00000001
        "ElevateNonAdmins"=dword:00000000
        "WUServer"="http://xxx.xxx.xxx.xxx:8530"
        "WUStatusServer"="http://xxx.xxx.xxx.xxx:8530"

        [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU]
        "AUOptions"=dword:00000004
        "AutoInstallMinorUpdates"=dword:00000001
        "DetectionFrequencyEnabled"=dword:00000001
        "DetectionFrequency"=dword:00000002
        "NoAutoUpdate"=dword:00000000
        "NoAutoRebootWithLoggedOnUsers"=dword:00000001
        "RebootRelaunchTimeout"=dword:00000030
        "RebootRelaunchTimeoutEnabled"=dword:00000001
        "RescheduleWaitTime"=dword:00000020
        "RescheduleWaitTimeEnabled"=dword:00000001
        "ScheduledInstallDay"=dword:00000000
        "ScheduledInstallTime"=dword:00000003
        "UseWUServer"=dword:00000001

    There is a bit of redundancy, if you want to call it that, in having both the NoAutoRebootWithLoggedOnUsers entry and the entries for RebootRelaunchTimeout, but we wanted to see if we could either disable the restart or give our users a larger window within which they could wrap up their work, etc. before restarting. Neither of these entries seems to work, but our priority is getting NoAutoRebootWithLoggedOnUsers working, and any help with this would be greatly appreciated.

    Read the article

  • rsync per-site configuration file?

    - by Scott
    I know how to configure a per-site entry for ssh, but is there any kind of a client configuration for rsync that allows per-site configuration options and aliases or similar shortcuts like .ssh/config? I'm curious because I have a minimal ssh server installed on my android phone and I also have a minimal rsync tool on it as well. I'm getting tired of having to log in as root on the phone and symlink both tools to the standard places the android OS looks for executables, as the ssh server is bare bones and has a typical *bear multi-link binary for the basic unix commands (which does not include rsync). I end up having to include --rsync-path=/path/to/rsync/android/files/rsync every time I want to do any rsyncing of the files on my phone, but this path is always the same. I've gotten around it in the meantime with a glob approach in a shell script wrapper, but this sometimes limits the customization I can do with the rsync call. I'm just wondering if there is anything similar to the .ssh/config file where I can create an alias for my phone (e.g. 'android') where specifying rsync android:/mnt/sdcard will automatically assume --rsync-path=/blah/blah/blah --no-g --no-p --no-t etc. Tre`
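
    rsync itself has no per-host client configuration file, so one hedged workaround sketch (the alias name, remote rsync path, and option set below are illustrative assumptions, not values from the post) is to pair a Host alias in ~/.ssh/config with a small shell function that pins the options that never change:

        # ~/.ssh/config
        #   Host android
        #       HostName 192.168.1.50
        #       User root
        # shell function, e.g. in ~/.bashrc
        rsync_android() {
            rsync -e ssh --rsync-path=/data/local/bin/rsync --no-g --no-p --no-t "$@"
        }
        # usage: rsync_android android:/mnt/sdcard/ ~/phone-backup/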

    Read the article

  • Basics about file/folder permissions on Win 7

    - by Altar
    Hi. Under Win XP I never touched the permissions of a file/folder; I was happy with the way it worked. But recently I installed Windows 7 on a drive that previously hosted Windows XP. Now some programs do not have 'read' and/or 'write' access to their own folders - and I am not talking about system folders like 'Program Files' but normal folders like 'C:\my data\my own folder\program folder'. I see that folders created under Win XP have some user groups that do not exist for 'normal' folders (folders created by me recently under Windows 7). For example, for a Win XP folder I have: Creator owner, System, Account unknown (S-1-5-21 blablabla..., Admins, Users. For Win7 folders I have: Authenticated users, System, Admins, Users. How should I proceed? Should I give the "Users" account the right to write to the XP folders? Should I make the old folders (the XP folders) have the same groups of users as the normal (Win7) ones by adding the "Authenticated users" account to those folders? Should I delete the "Account unknown" account from my system? (In this case, how?) Many thanks.

    Read the article

  • User unable to delete folder / files "File in use by another user" Server 2003

    - by Az
    I am administering a standalone Windows 2003 Terminal Server with no domain membership. Occasionally (about once a week or so) a user will attempt to delete a sub-folder in a Shared folder and gets denied with "File in use by another user". I tried checking the shared folder snap-in and that folder is not open. She has full control and is the owner of the folder as well. I even checked in "Effective permissions" for some of the folders / files she can't delete, and she truly has full control. I am able to delete the folder as Administrator with no problem. Another odd thing: she can delete the files IN the folder most of the time (this issue happens on both folders and files in the share). Sometimes merely waiting a day or two will allow her to delete the folder or files. I am curious as to why she gets the message that it is in use as creator/owner with full control, yet I don't get it simply as a member of the Admin group. If anyone out there has any ideas I'd love to hear them! THANK YOU.

    Read the article

  • Slower/cached Linux file system required

    - by Chopper3
    I know it sounds odd but I need a slower or cached filesystem. I have a lot of firewalls that are syslog'ing their data to a pair of Linux VMs which write these files to their 'local' (actually FC SAN attached) ext3-formatted disks and also forward the messages to our Splunk servers. The problem is that the syslog server is writing these syslog messages as hundreds, sometimes thousands, of tiny ~4k writes per second back to our FC SAN - which can handle this workload right now, but our FW traffic's going to be growing by at least a factor of 5000% (really) in coming months and that'll be a pain for the SAN; I want to fix the root cause before it's a problem. So I need some help figuring out a way of getting these writes cached or held off in some way from the 'physical' disks so that the VMs fire off larger, but less frequent, writes - there's no way of avoiding these writes, but there's no need to do so many tiny ones. I've looked at the various ext3 options, setting noatime and nodiratime, but that hasn't made much of a dent in the problem. Obviously I'm investigating other file systems but thought I'd throw this out in case others have the same problem in the future. Oh, and I can't just forward these messages to Splunk; our firewall team insists they be kept in their original format for diag purposes.
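
    One hedged sketch of the "cache and batch" idea, with illustrative values rather than recommendations from the post: lengthen the ext3 journal commit interval and relax the kernel's dirty-page writeback thresholds so thousands of ~4k writes coalesce into fewer, larger flushes before they reach the SAN. The trade-off is that a crash can lose up to the commit/expiry window of syslog data.

        # /etc/fstab entry (device and mount point are placeholders)
        #   /dev/sdb1  /var/log/remote  ext3  noatime,data=writeback,commit=60  0 2
        # let dirty pages accumulate and age longer before forced writeback
        sysctl -w vm.dirty_background_ratio=20
        sysctl -w vm.dirty_ratio=40
        sysctl -w vm.dirty_expire_centisecs=6000   # roughly 60s before flush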

    Read the article

  • sub application and virtual directory file permissions

    - by Zeus
    I have a website set up in IIS7, exampledomain.com. Under the application exampledomain.com lives a sub application cms. In a rather convoluted way, we have content in our cms system in this sub-app, under cms\content\{generatedfoldername}. So to access an image in this content, the full URL would be http://www.exampledomain.com/cms/cms/content/{generatedfoldername}/image.jpg (yes, cms twice...) and this works just fine. Now, we have a virtual directory under the parent website, called stuff, which points at the content of the cms. So I should be able to get to the image using the url http://www.exampledomain.com/stuff/{generatedfoldername}/image.jpg. Unfortunately this gives a server 500 error: "There is a problem with the resource you are looking for, and it cannot be displayed." Whilst you do have to log into the cms system to access any of the admin pages within, I don't think the image files are protected by login, or else the first example URL wouldn't work, right? Also it's a server 500 error, rather than a 403. I'm sure I must be missing something obvious here - will the virtual directory be using the permissions defined in the parent application, or the subapplication to which it is pointing? Or is there some other permission I may have missed? Sorry, that was a bit long, thanks for reading all the way down here! (I also must point out that I'm pretty new to the server management stuff.) edit: also, we have <location path="." inheritInChildApplications="false"> specified in the web.config of the parent app, so it's hopefully not the issue described in this config file hierarchy article.

    Read the article

  • File corruption (bad checksums) in large files copied to VMware guest

    - by AllanA
    In setting up a development lab, I've got a desktop system running ESXi 4.1.0 (free license) on SATA RAID 0 (already purchased and configured when I started this job; I'm open to hardware input as it pertains to my problem). Its guests so far include two Win2008 Server R2 64-bit VMs and one Ubuntu 10.04 64-bit VM. I'm installing onto the Windows servers. We've been copying off some fairly large files (over a gigabyte) for an installation, hoping to install more quickly from a (virtual) hard drive than from the network or from BD-ROM. The problem is that they keep coming up with different checksums from the originals. The file sizes are the same, but md5sum reports different numbers (and so does the installer, as it refuses to continue when the checksums don't match). I've tried copying directly from the BD-ROM (attaching the OS drive to the host system's physical drive). I've tried copying the large files onto a co-worker's Windows machine from his Blu-Ray drive; when I do that, the checksums match. But when I copy from his machine to the VM guest over a network share, the checksums no longer match. Thinking this meant a corrupt destination drive, I deleted it in vSphere and added another freshly created drive. The problem persists. I'm not sure what to try next.
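
    A hedged diagnostic sketch, not from the post (file names are placeholders, and it assumes a machine with coreutils such as the Ubuntu guest or a Linux live environment): checksumming the file in fixed-size chunks on both source and destination shows whether the mismatch is one damaged region, which points at the copy path, or bit flips scattered throughout, which points more at flaky RAM or the RAID 0 array.

        # run on the source, repeat on the destination, then diff the two lists
        split -b 64M install.iso chunk_
        md5sum chunk_* > chunks.md5
        rm chunk_*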

    Read the article

  • nginx configuration file explained

    - by Chris Muench
    I have a few questions about this configuration file "default" in /etc/nginx/sites-enabled. It is shown below.

        server {
            root /usr/share/nginx/www;
            index index.html index.htm;

            # Make site accessible from http://localhost/
            server_name localhost;

            location / {
                proxy_pass http://127.0.0.1:8080;
            }

            location /doc {
                root /usr/share;
                autoindex on;
                allow 127.0.0.1;
                deny all;
            }

            location /images {
                root /usr/share;
                autoindex off;
            }
        }

    There is no "listen" directive, so how does it know to default to 80? The server_name is localhost, so how does another domain work? Why is the location directive embedded in the server directive? Does that mean these locations ONLY apply to this server? None of my configs have listen 80 default_server; how does nginx then pick what configuration to use?
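
    A hedged way to check the "which server block answers" questions empirically (the hostname below is a placeholder and the paths assume a Debian/Ubuntu-style nginx layout): with no listen directive a server block defaults to port 80 when nginx runs as root, and a Host header that matches no server_name falls through to the default server for that port, which is the first block defined on it unless one is marked default_server.

        # list every listen/server_name pair nginx will actually load
        grep -RE "listen|server_name" /etc/nginx/sites-enabled/ /etc/nginx/conf.d/ 2>/dev/null
        # send a request with an arbitrary Host header and see which block answers
        curl -s -o /dev/null -w "%{http_code}\n" -H "Host: otherdomain.example" http://127.0.0.1/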

    Read the article

  • Script apparently changing file permissions on Mac OS to 000

    - by half_bit
    I wrote a little shell script that helps install a web application. The script itself just downloads a zip archive, extracts it and changes the permissions of the extracted files to the ones needed to run the webapp. The problem now is that some users reported that after running my script, the permissions of every file in their home directory or even on their whole computer changed to 000 (except the actual unzipped files, which do have the correct permissions). The only lines in my script actually doing IO are these:

        URL="http://foo.com/"
        FILENAME="some.zip"

        curl --silent "$URL$FILENAME" -o $FILENAME > /dev/null

        echo "Unzipping...\c"
        if unzip -oqq $FILENAME > /dev/null
        then
            chmod -R 777 app/tmp app/webroot app/Config/database* app/configuration*
            chown -R www:www *
            rm $FILENAME
            echo "\t\t\tOK"
            exit 0
        else
            echo "\t\t\tERROR"
            exit 1
        fi

    I seriously can't explain this to myself. How can this even be possible? It is entirely possible that the users accidentally ran the script in their home directory, but that still wouldn't explain why the permissions were set to 000, not www/777.
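
    A hedged hardening sketch rather than an explanation of those reports (the temporary directory and its name are assumptions for illustration): anchoring every recursive chmod/chown to an explicit, freshly created directory means a run from the wrong working directory can never walk $HOME or /, whatever the actual trigger turns out to be.

        set -euo pipefail                              # stop on the first failed step
        INSTALL_DIR="$(mktemp -d /tmp/webapp.XXXXXX)"  # hypothetical location
        cd "$INSTALL_DIR"
        curl --silent "$URL$FILENAME" -o "$FILENAME"
        unzip -oqq "$FILENAME"
        chmod -R 777 "$INSTALL_DIR/app/tmp" "$INSTALL_DIR/app/webroot"
        chown -R www:www "$INSTALL_DIR"
        rm "$FILENAME"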

    Read the article

  • Create xml file to be used in wordpress post import

    - by adedoy
    Alright, here is the thing: I have this site that was once WordPress but has been converted into 70+ static pages; the admin is deleted and the whole site is static (which means every page is in index.html). I want to create a script that makes an XML file so that I will just have to import it into the new WordPress install. So far, I am able to create an XML but it only imports one post. The data source is the URL of a page and I use jQuery $.get to filter out and gather only the post of a given archive.

        //html
        <input type="text" class="full_path">
        <input type="button" value="Get Data" class="getdata">

        //script
        $('.getdata').click(function(){
            $.get($('.full_path').val(), function(data) {
                post = $(data).find('div [style*="width:530px;"]');
                $('.result').html(post.html());
            });
        });//get Data Through AJAX

    I send the cleaned data into the PHP below that creates the XML:

        $file = 'newpost.xml';
        $post_data = $_REQUEST['post_data'];

        // Open the file to get existing content
        $current = file_get_contents($file);

        // Append a new post to the file
        $catStr = '';
        if(isset($post_data['categories']) && count($post_data['categories']) > 0){
            foreach($post_data['categories'] as $category) {
                $catStr .= '<category domain="category" nicename="'.$category.'"><![CDATA['.$category.']]></category>';
            }
        }
        $tagStr = '';
        if(isset($post_data['tags']) && count($post_data['tags']) > 0){
            foreach($post_data['tags'] as $tag) {
                $tagStr = '<category domain="post_tag" nicename="'.$tag.'"><![CDATA['.$tag.']]></category>';
            }
        }
        $post_name = str_replace(' ','-',$post_data["title"]);
        $post_name = str_replace(array('"','/',':','.',',','[',']','“','”'),'',strtolower($post_name));
        $post_date = '2011-4-0'.rand(1, 29).''.rand(1, 12).':'.rand(1, 59).':'.rand(1, 59);
        $pubTime = rand(1, 12).':'.rand(1, 59).':'.rand(1, 59).' +0000';
        $post = '
        <item>
            <title>'.$post_data["title"].'</title>
            <link>'.$post_data["link"].'</link>
            <pubDate>'.$post_data["date"].' '.$pubTime.'</pubDate>
            <dc:creator>admin</dc:creator>
            <guid isPermaLink="false">http://localhost/saunders/?p=1</guid>
            <description></description>
            <content:encoded><![CDATA['.$post_data["content"].']]></content:encoded>
            <excerpt:encoded><![CDATA[]]></excerpt:encoded>
            <wp:post_id>1</wp:post_id>
            <wp:post_date>'.$post_date.'</wp:post_date>
            <wp:post_date_gmt>'.$post_date.'</wp:post_date_gmt>
            <wp:comment_status>open</wp:comment_status>
            <wp:ping_status>open</wp:ping_status>
            <wp:post_name>'.$post_name.'</wp:post_name>
            <wp:status>publish</wp:status>
            <wp:post_parent>0</wp:post_parent>
            <wp:menu_order>0</wp:menu_order>
            <wp:post_type>post</wp:post_type>
            <wp:post_password></wp:post_password>
            <wp:is_sticky>0</wp:is_sticky>
            '.$catStr.'
            '.$tagStr.'
            <wp:postmeta>
                <wp:meta_key>_edit_last</wp:meta_key>
                <wp:meta_value><![CDATA[1]]></wp:meta_value>
            </wp:postmeta>
        </item>
        ';

        // Write the contents back to the file with the appended post
        file_put_contents($file, $current.$post);

    After the posts have been appended I add the code below to close the XML rss tag:

        </channel>
        </rss>

    If I compare this with the XML file exported from a WordPress site, I see little difference. Please HELP!!
    Here is a sample of a generated XML:

        <?xml version="1.0" encoding="UTF-8" ?>
        <rss version="2.0"
            xmlns:excerpt="http://wordpress.org/export/1.2/excerpt/"
            xmlns:content="http://purl.org/rss/1.0/modules/content/"
            xmlns:wfw="http://wellformedweb.org/CommentAPI/"
            xmlns:dc="http://purl.org/dc/elements/1.1/"
            xmlns:wp="http://wordpress.org/export/1.2/"
        >
        <channel>
            <title>lols why</title>
            <link>http://localhost/lols</link>
            <description>Just another WordPress site</description>
            <pubDate>Wed, 03 Oct 2012 04:24:04 +0000</pubDate>
            <language>en-US</language>
            <wp:wxr_version>1.2</wp:wxr_version>
            <wp:base_site_url>http://localhost/lols</wp:base_site_url>
            <wp:base_blog_url>http://localhost/lols</wp:base_blog_url>
            <wp:author><wp:author_id>1</wp:author_id><wp:author_login>adedoy</wp:author_login><wp:author_email>[email protected]</wp:author_email><wp:author_display_name><![CDATA[adedoy]]></wp:author_display_name><wp:author_first_name><![CDATA[]]></wp:author_first_name><wp:author_last_name><![CDATA[]]></wp:author_last_name></wp:author>
            <generator>http://wordpress.org/?v=3.4.1</generator>
            <item>
                <title>Sample lift?</title>
                <link>../../breast-lift/delaware-breast-surgery-do-i-need-a-breast-lift/</link>
                <pubDate>Wed, 03 Oct 2012 9:29:16 +0000</pubDate>
                <dc:creator>admin</dc:creator>
                <guid isPermaLink="false">http://localhost/lols/?p=1</guid>
                <description></description>
                <content:encoded><![CDATA[<p>sample</p>]]></content:encoded>
                <excerpt:encoded><![CDATA[]]></excerpt:encoded>
                <wp:post_id>1</wp:post_id>
                <wp:post_date>2011-4-0132:45:4</wp:post_date>
                <wp:post_date_gmt>2011-4-0132:45:4</wp:post_date_gmt>
                <wp:comment_status>open</wp:comment_status>
                <wp:ping_status>open</wp:ping_status>
                <wp:post_name>sample-lift?</wp:post_name>
                <wp:status>publish</wp:status>
                <wp:post_parent>0</wp:post_parent>
                <wp:menu_order>0</wp:menu_order>
                <wp:post_type>post</wp:post_type>
                <wp:post_password></wp:post_password>
                <wp:is_sticky>0</wp:is_sticky>
                <category domain="category" nicename="Sample Lift"><![CDATA[Sample Lift]]></category><category domain="category" nicename="Sample Procedures"><![CDATA[Yeah Procedures]]></category>
                <category domain="post_tag" nicename="delaware"><![CDATA[delaware]]></category>
                <wp:postmeta>
                    <wp:meta_key>_edit_last</wp:meta_key>
                    <wp:meta_value><![CDATA[1]]></wp:meta_value>
                </wp:postmeta>
            </item>
            <item>
                <title>lalalalalala</title>
                <link>../../administrative-tips-for-surgery/delaware-cosmetic-surgery-a-better-experience/</link>
                <pubDate>Wed, 03 Oct 2012 3:20:43 +0000</pubDate>
                <dc:creator>admin</dc:creator>
                <guid isPermaLink="false">http://localhost/lols/?p=1</guid>
                <description></description>
                <content:encoded><![CDATA[ lalalalalala ]]></content:encoded>
                <excerpt:encoded><![CDATA[]]></excerpt:encoded>
                <wp:post_id>1</wp:post_id>
                <wp:post_date>2011-4-0124:39:30</wp:post_date>
                <wp:post_date_gmt>2011-4-0124:39:30</wp:post_date_gmt>
                <wp:comment_status>open</wp:comment_status>
                <wp:ping_status>open</wp:ping_status>
                <wp:post_name>lalalalalala</wp:post_name>
                <wp:status>publish</wp:status>
                <wp:post_parent>0</wp:post_parent>
                <wp:menu_order>0</wp:menu_order>
                <wp:post_type>post</wp:post_type>
                <wp:post_password></wp:post_password>
                <wp:is_sticky>0</wp:is_sticky>
                <category domain="category" nicename="lalalalalala"><![CDATA[lalalalalala]]></category>
                <category domain="post_tag" nicename="oink"><![CDATA[oink]]></category>
                <wp:postmeta>
                    <wp:meta_key>_edit_last</wp:meta_key>
                    <wp:meta_value><![CDATA[1]]></wp:meta_value>
                </wp:postmeta>
            </item>
        </channel>
        </rss>

    Please tell me what I am missing....

    Read the article

  • Scheduling a RichCopy Jobs

    - by Ian B
    Anyone use the timer feature of RichCopy? I have a job that works fine when I manually start it. However, when I schedule the job and click run, the app appears to be waiting for the scheduled time to elapse yet never fires. Interestingly enough, when I stop the job the copy starts. Anyone have any experience with using the RichCopy timer? IanB

    Read the article

  • fatal error LNK1112: module machine type 'X86' conflicts with target machine type 'AMD64'

    - by KK
    Hi, I am using VS 2003 .NET on a 32-bit XP OS. I have also installed the "Microsoft Platform SDK" on my machine. Can I build a VC++ application (binaries) targeted at a 64-bit OS? I am using the following project options:

        Name="VCLinkerTool"
        AdditionalOptions="/machine:AMD64 bufferoverflowU.lib"
        OutputFile="\bin\Release\MM64.dll"
        LinkIncremental="1"
        SuppressStartupBanner="TRUE"
        AdditionalLibraryDirectories="&quot;C:\Program Files\Microsoft Platform SDK\Lib\AMD64&quot;"
        GenerateDebugInformation="TRUE"
        ProgramDatabaseFile="\bin\Release\MM64.pdb"
        GenerateMapFile="TRUE"
        MapFileName="\bin\Release\MM64.map"
        MapExports="TRUE"
        MapLines="TRUE"
        OptimizeReferences="2"
        EnableCOMDATFolding="2"
        ImportLibrary=".\Release/MM64.lib"
        TargetMachine="0"/>

    I am getting the following error:

        fatal error LNK1112: module machine type 'X86' conflicts with target machine type 'AMD64'

    Do I need to build the project on a 64-bit OS, or do I need to change project settings to resolve this error? Please help me to resolve this issue.

    Read the article

  • Could not load file or assembly 'System.Web.Ajax, Version=3.0.31106.0

    - by Jonesy
    Hi folks, I have a .net application (vb.net) and I'm using the ajax control toolkit. It works fine on my production machine but when I upload it to the host (fasthosts) I get this error:

        Could not load file or assembly 'System.Web.Ajax, Version=3.0.31106.0, Culture=neutral, PublicKeyToken=28f01b0e84b6d53e' or one of its dependencies. The module was expected to contain an assembly manifest.
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.BadImageFormatException: Could not load file or assembly 'System.Web.Ajax, Version=3.0.31106.0, Culture=neutral, PublicKeyToken=28f01b0e84b6d53e' or one of its dependencies. The module was expected to contain an assembly manifest.
        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.
        Assembly Load Trace: The following information can be helpful to determine why the assembly 'System.Web.Ajax, Version=3.0.31106.0, Culture=neutral, PublicKeyToken=28f01b0e84b6d53e' could not be loaded.
        WRN: Assembly binding logging is turned OFF. To enable assembly bind failure logging, set the registry value [HKLM\Software\Microsoft\Fusion!EnableLog] (DWORD) to 1. Note: There is some performance penalty associated with assembly bind failure logging. To turn this feature off, remove the registry value [HKLM\Software\Microsoft\Fusion!EnableLog].

        Stack Trace:
        [BadImageFormatException: Could not load file or assembly 'System.Web.Ajax, Version=3.0.31106.0, Culture=neutral, PublicKeyToken=28f01b0e84b6d53e' or one of its dependencies. The module was expected to contain an assembly manifest.]
        AjaxControlToolkit.ToolkitScriptManager.ApplyAssembly(ScriptReference script, Boolean isComposite) +0
        AjaxControlToolkit.ToolkitScriptManager.OnResolveScriptReference(ScriptReferenceEventArgs e) +167
        System.Web.UI.ScriptManager.RegisterScripts() +191
        System.Web.UI.ScriptManager.OnPagePreRenderComplete(Object sender, EventArgs e) +113
        System.Web.UI.Page.OnPreRenderComplete(EventArgs e) +8698462
        System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +1029

    Here is my web.config file. It's very simple:

        <system.web>
            <customErrors mode="Off"/>
            <compilation debug="true">
                <assemblies>
                    <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
                    <add assembly="System.Web.Extensions.Design, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
                    <add assembly="System.Design, Version=2.0.0.0, Culture=neutral, PublicKeyToken=B03F5F7F11D50A3A"/>
                    <add assembly="System.Windows.Forms, Version=2.0.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>
                </assemblies>
            </compilation>
        </system.web>

    Does anyone know what's up? -- Billy

    Read the article

  • Reading .ppt (MS PowerPoint) file in Cocoa Touch

    - by Biranchi
    Hi All, Any idea how to read a .ppt file in Cocoa Touch ? I tried to load the contents of the file in UIWebView but it didn't work. Here is the code : [aWebView loadData:[NSData dataWithContentsOfFile:filePath] MIMEType:@"application/vnd.ms-powerpoint" textEncodingName:@"utf-8" baseURL:[NSURL fileURLWithPath:filePath]]; [powerWeb loadData:[NSData dataWithContentsOfFile:filePath] MIMEType:@"application/vnd.ms-powerpoint" textEncodingName:@"utf-8" baseURL:[NSURL fileURLWithPath:filePath]]; All suggestions are highly appreciated. Thanks

    Read the article

  • Alternative to Amazon’s S3 service?

    - by Cory
    Just wondering if there is a good alternative to Amazon's S3 service? I like S3 but the bandwidth cost is high. I looked at CloudFiles from Rackspace but the cost is even higher. I don't mind prepaying or having a monthly payment in order to reduce the bandwidth cost greatly. Thank you for any help.

    Read the article

  • Running a batch file on a remote machine from a Hudson server using PSEXEC

    - by vishu
    I am new to Hudson with PSEXEC. I am using Hudson on my computer, and I want to run a batch file on a remote computer from a Hudson build. I used PSEXEC to run the batch file on the remote computer; when I executed it from the command prompt it worked successfully, but when I did the same from the Hudson build it hangs and doesn't do anything. Please give any suggestions, or is there any other way we can handle this? I want to do this quickly - it's urgent. Anyone's help is appreciated, thanks in advance.

    Read the article

  • Play an audio file using RemoteIO and Audio Unit

    - by NeilMonday
    I am looking at Apple's 'aurioTouch' example for the iPhone and I would like to play an mp3 or wav instead of using the built in mic. I am very new to the audio portion of iPhone programming, but I think I need to modify the SetupRemoteIO(...) function and replace the AudioComponent named 'comp' with a custom AudioComponent that plays a file. Basically I want the app to function exactly the same as the original, but with an audio file as the input instead of the mic.

    Read the article

  • Transform OpenGL coordinates to lower UIView coordinates

    - by John Qualis
    Hi, I am new to OpenGL on the iPhone. I am developing an iPhone app similar to a barcode reader but with an extra OpenGL layer. The bottommost layer is UIImagePickerController, then I use a UIView on top and draw a rectangle at certain co-ordinates on the iPhone screen. So far everything is OK. Then I am trying to draw an OpenGL 3-D model in that rectangle. I am able to load a 3-D model on the iPhone based on this code here - http://iphonedevelopment.blogspot.com/2008/12/start-of-wavefront-obj-file-loader.html - but I am not able to transform the co-ordinates of the rectangle into OpenGL co-ordinates. Do I need to use a matrix to translate the currentPosition of the 3-D model so it is drawn within myRect? The code is given below. Appreciate any help/pointers in this regard. John

        - (void)drawView:(GLView*)view
        {
            static GLfloat rotation = 0.0;
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glLoadIdentity();
            glColor4f(0.0, 0.5, 1.0, 1.0);
            // The coordinates of the rectangle are myRect.x,
            // myRect.y, myRect.width, myRect.height
            // Do I need a transform matrix here?
            //glOrthof(-160.0f, 160.0f, -240.0f, 240.0f, -1.0f, 1.0f);
            [plane drawSelf];
            ....
        }

        - (void)setupView:(GLView*)view
        {
            const GLfloat zNear = 0.01, zFar = 1000.0, fieldOfView = 45.0;
            GLfloat size;
            glEnable(GL_DEPTH_TEST);
            glMatrixMode(GL_PROJECTION);
            //glMatrixMode(GL_MODELVIEW);
            size = zNear * tanf(DEGREES_TO_RADIANS(fieldOfView) / 2.0);
            CGRect rect = view.bounds;
            glFrustumf(-size, size, -size / (rect.size.width / rect.size.height),
                       size / (rect.size.width / rect.size.height), zNear, zFar);
            glViewport(0, 0, rect.size.width, rect.size.height);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
        }
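
    For the coordinate question, a hedged sketch of the mapping implied by that symmetric frustum (the symbols below are mine, not from the post): at a plane a distance d in front of the camera, the half-width of the visible volume is w = d * tan(fieldOfView / 2), which matches the size computed in setupView when d = zNear, and the half-height is h = w * (H / W) for a view of W x H pixels. A pixel (px, py) on that plane then maps to eye-space coordinates

        x = (2 * px / W - 1) * w
        y = (1 - 2 * py / H) * h     (flipped because UIKit's y axis points down)
        z = -d

    so pushing the four corners of myRect through this at the depth where the model sits gives the target rectangle in OpenGL coordinates.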

    Read the article
