Search Results

Search found 14435 results on 578 pages for 'disk usage'.

Page 549 of 578

  • HP-UX: libstd_v2 in stack trace of JNI code compiled with g++

    - by Miguel Rentes
    Hello, uname -mr: B.11.23 ia64 g++ --version: g++ (GCC) 4.4.0 java -version: Java(TM) SE Runtime Environment (build 1.6.0.06-jinteg_20_jan_2010_05_50-b00) Java HotSpot(TM) Server VM (build 14.3-b01-jre1.6.0.06-rc1, mixed mode) I'm trying to run a Java application that uses JNI. It is crashing inside the JNI code with the following (abbreviated) stack trace: (0) 0xc0000000249353e0 VMError::report_and_die{_ZN7VMError14report_and_dieEv} + 0x440 at /CLO/Components/JAVA_HOTSPOT/Src/src/share/vm/utilities/vmError.cpp:738 [/opt/java6/jre/lib/IA64W/server/libjvm.so] (1) 0xc000000024559240 os::Hpux::JVM_handle_hpux_signal{_ZN2os4Hpux22JVM_handle_hpux_signalEiP9 __siginfoPvi} + 0x760 at /CLO/Components/JAVA_HOTSPOT/Src/src/os_cpu/hp-ux_ia64/vm/os_hp-ux_ia64.cpp:1051 [/opt/java6/jre/lib/IA64W/server/libjvm.so] (2) 0xc0000000245331c0 os::Hpux::signalHandler{_ZN2os4Hpux13signalHandlerEiP9__siginfoPv} + 0x80 at /CLO/Components/JAVA_HOTSPOT/Src/src/os/hp-ux/vm/os_hp-ux.cpp:4295 [/opt/java6/jre/lib/IA64W/server/libjvm.so] (3) 0xe00000010e002620 ---- Signal 11 (SIGSEGV) delivered ---- (4) 0xc0000000000d2d20 __pthread_mutex_lock + 0x400 at /ux/core/libs/threadslibs/src/common/pthreads/mutex.c:3895 [/usr/lib/hpux64/libpthread.so.1] (5) 0xc000000000342e90 __thread_mutex_lock + 0xb0 at ../../../../../core/libs/libc/shared_em_64/../core/threads/wrappers1.c:273 [/usr/lib/hpux64/libc.so.1] (6) 0xc00000000177dff0 _HPMutexWrapper::lock{_ZN15_HPMutexWrapper4lockEPv} + 0x90 [/usr/lib/hpux64/libstd_v2.so.1] (7) 0xc0000000017e9960 std::basic_string,std::allocator{_ZNSsC1ERKSs} + 0x80 [/usr/lib/hpux64/libstd_v2.so.1] (8) 0xc000000008fd9fe0 JniString::str{_ZNK9JniString3strEv} + 0x50 at eg_handler_jni.cxx:50 [/soft/bus-7_0/lib/libbus_registry_jni.so.7.0.0] (9) 0xc000000008fd7060 pt_efacec_se_aut_frk_cmp_registry_REGHandler::getKey{_ZN44pt_efacec_se_aut_frk_cmp_registry_REGHandler6getKeyEP8_jstringi} + 0xa0 [/soft/bus-7_0/lib/libbus_registry_jni.so.7.0.0] (10) 0xc000000008fd17f0 Java_pt_efacec_se_aut_frk_cmp_registry_REGHandler_getKey__Ljava_lang_String_2I + 0xa0 [/soft/bus-7_0/lib/libbus_registry_jni.so.7.0.0] (11) 0x9fffffffdf400ed0 Internal error (-3) while unwinding stack [/CLO/Components/JAVA_HOTSPOT/Src/src/os_cpu/hp-ux_ia64/vm/thread_hp-ux_ia64.cpp:142] This JNI code and dependencies are being compiled using g++, are multithreaded and 64 bit (-pthread -mlp64 -shared -fPIC). 
The LD_LIBRARY_PATH environment variable is set to the dependencies' location, and running ldd on the JNI shared libraries finds them all: ldd libbus_registry_jni.so: libefa-d.so.7 => /soft/bus-7_0/lib/libefa-d.so.7 libbus_registry-d.so.7 => /soft/bus-7_0/lib/libbus_registry-d.so.7 libboost_thread-gcc44-mt-d-1_41.so => /usr/local/lib/libboost_thread-gcc44-mt-d-1_41.so libboost_system-gcc44-mt-d-1_41.so => /usr/local/lib/libboost_system-gcc44-mt-d-1_41.so libboost_regex-gcc44-mt-d-1_41.so => /usr/local/lib/libboost_regex-gcc44-mt-d-1_41.so librt.so.1 => /usr/lib/hpux64/librt.so.1 libstdc++.so.6 => /opt/hp-gcc-4.4.0/lib/gcc/ia64-hp-hpux11.23/4.4.0/../../../hpux64/libstdc++.so.6 libm.so.1 => /usr/lib/hpux64/libm.so.1 libgcc_s.so.0 => /opt/hp-gcc-4.4.0/lib/gcc/ia64-hp-hpux11.23/4.4.0/../../../hpux64/libgcc_s.so.0 libunwind.so.1 => /usr/lib/hpux64/libunwind.so.1 librt.so.1 => /usr/lib/hpux64/librt.so.1 libm.so.1 => /usr/lib/hpux64/libm.so.1 libunwind.so.1 => /usr/lib/hpux64/libunwind.so.1 libdl.so.1 => /usr/lib/hpux64/libdl.so.1 libunwind.so.1 => /usr/lib/hpux64/libunwind.so.1 libc.so.1 => /usr/lib/hpux64/libc.so.1 libuca.so.1 => /usr/lib/hpux64/libuca.so.1 Looking at the stack trace, it seems odd that, although ldd lists g++'s libstdc++ as being used, the std::string copy constructor reported in the trace is the one in libstd_v2, the implementation provided by aCC. The crash happens in the following code, when method str() returns: class JniString { std::string m_utf8; public: JniString(JNIEnv* env, jstring instance) { const char* utf8Chars = env->GetStringUTFChars(instance, 0); if (utf8Chars == 0) { env->ExceptionClear(); // RPF throw std::runtime_error("GetStringUTFChars returned 0"); } m_utf8.assign(utf8Chars); env->ReleaseStringUTFChars(instance, utf8Chars); } std::string str() const { return m_utf8; } }; Simultaneous usage of the two C++ implementations could likely be a reason for the crash, but that should not be happening. Any ideas?
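
    If the two runtimes really are being mixed, one narrow way to sidestep this particular crash site - a sketch only, not a fix for the underlying library mix - is to avoid constructing a new std::string when crossing the boundary, for example by handing out a reference or the raw UTF-8 pointer instead of returning by value (caller lifetimes permitting):

        class JniString {
            std::string m_utf8;
        public:
            // ... constructor unchanged from the question ...
            const std::string& str() const { return m_utf8; }    // no copy constructor runs at return
            const char* utf8() const { return m_utf8.c_str(); }  // raw pointer, valid while this object lives
        };

    The real cause still needs tracking down (typically something in the link line or runtime library search path pulling in libstd_v2 ahead of g++'s libstdc++), since any other std::string that crosses between the two runtimes can hit the same mismatch.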

    Read the article

  • Is this asking too much of a browser?

    - by Matt Ball
    I'm embedding a large array in <script> tags in my HTML, like this (nothing surprising): <script> var largeArray = [/* lots of stuff in here */]; </script> In this particular example, the array has 210,000 elements. That's well below the theoretical maximum of 2^31 - by 4 orders of magnitude. Here's the fun part: if I save the JS source for the array to a file, that file is 44 megabytes (46,573,399 bytes, to be exact). If you want to see for yourself, you can download it from my Dropbox. (All the data in there is canned, so much of it is repeated. This will not be the case in production.) Now, I'm really not concerned about serving that much data. My server gzips its responses, so it really doesn't take all that long to get the data over the wire. However, there is a really nasty tendency for the page, once loaded, to crash the browser. I'm not testing at all in IE (this is an internal tool). My primary targets are Chrome 8 and Firefox 3.6. In Firefox, I can see a reasonably useful error in the console: Error: script stack space quota is exhausted In Chrome, I simply get the sad-tab page. Cut to the chase, already: is this really too much data for our modern, "high-performance" browsers to handle? Is there anything I can do* to gracefully handle this much data? Incidentally, I was able to get this to work (read: not crash the tab) on-and-off in Chrome. I really thought that Chrome, at least, was made of tougher stuff, but apparently I was wrong... Edit 1 @Crayon: I wasn't looking to justify why I'd like to dump this much data into the browser at once. Short version: either I solve this one (admittedly not-that-easy) problem, or I have to solve a whole slew of other problems. I'm opting for the simpler approach for now. @various: right now, I'm not especially looking for ways to actually reduce the number of elements in the array. I know I could implement Ajax paging or what-have-you, but that introduces its own set of problems for me in other regards. @Phrogz: each element looks something like this: {dateTime:new Date(1296176400000), terminalId:'terminal999', 'General___BuildVersion':'10.05a_V110119_Beta', 'SSM___ExtId':26680, 'MD_CDMA_NETLOADER_NO_BCAST___Valid':'false', 'MD_CDMA_NETLOADER_NO_BCAST___PngAttempt':0} @Will: but I have a computer with a 4-core processor, 6 gigabytes of RAM, over half a terabyte of disk space ...and I'm not even asking for the browser to do this quickly - I'm just asking for it to work at all! *other than the obvious: sending less data to the browser
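
    One workaround often suggested for browsers of this vintage - a sketch, assuming the same data can be served as JSON from an endpoint such as /data.json, which is hypothetical here - is to fetch the data with XHR and build the array with JSON.parse instead of evaluating a 44 MB object literal, since parsing a string is usually far lighter on the JS engine than compiling a giant script block:

        var xhr = new XMLHttpRequest();
        xhr.open('GET', '/data.json', true);                   // hypothetical endpoint serving the same data
        xhr.onreadystatechange = function () {
          if (xhr.readyState === 4 && xhr.status === 200) {
            var largeArray = JSON.parse(xhr.responseText);     // native JSON.parse exists in Chrome 8 / Firefox 3.6
            // ... hand largeArray to the rest of the page ...
          }
        };
        xhr.send();

    The dateTime fields would then need to arrive as plain timestamps rather than new Date(...) expressions and be converted after parsing.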

    Read the article

  • multiple-inheritance substitution

    - by Luigi
    I want to write a module (framework specific) that would wrap and extend the Facebook PHP-sdk (https://github.com/facebook/php-sdk/). My problem is - how to organize the classes in a nice way. So getting into details - the Facebook PHP-sdk consists of two classes: BaseFacebook - abstract class with all the stuff the sdk does Facebook - extends BaseFacebook, and implements the parent's abstract persistence-related methods with default session usage Now I have some functionality to add: Facebook class substitution, integrated with the framework session class shorthand methods, that run api calls, I use mostly (through BaseFacebook::api()), authorization methods, so I don't have to rewrite this logic every time, configuration, sucked up from framework classes, instead of being passed as params caching, integrated with the framework cache module I know something has gone very wrong, because I have too much inheritance that doesn't look very normal. Wrapping everything in one "complex extension" class also seems too much. I think I should have a few classes working together - but I get into problems like: if the cache class doesn't really extend and override the BaseFacebook::api() method - the shorthand and authentication classes won't be able to use the caching. Maybe some kind of a pattern would be right in here? How would you organize these classes and their dependencies? EDIT 04.07.2012 Bits of code, related to the topic: This is the base class of the Facebook PHP-sdk: abstract class BaseFacebook { // ... some methods public function api(/* polymorphic */) { // ... method, that makes api calls } public function getUser() { // ... tries to get user id from session } // ... other methods abstract protected function setPersistentData($key, $value); abstract protected function getPersistentData($key, $default = false); // ... few more abstract methods } Normally the Facebook class extends it and implements those abstract methods. I replaced it with my substitute - the Facebook_Session class: class Facebook_Session extends BaseFacebook { protected function setPersistentData($key, $value) { // ... method body } protected function getPersistentData($key, $default = false) { // ... method body } // ... implementation of other abstract functions from BaseFacebook } Ok, then I extend this more with shorthand methods and configuration variables: class Facebook_Custom extends Facebook_Session { public function __construct() { // ... call parent's constructor with parameters from framework config } public function api_batch() { // ... a wrapper for parent's api() method return $this->api('/?batch=' . json_encode($calls), 'POST'); } public function redirect_to_auth_dialog() { // method body } // ... more methods like this, for common queries / authorization } I'm not sure if this isn't too much for a single class (authorization / shorthand methods / configuration). Then there comes another extending layer - cache: class Facebook_Cache extends Facebook_Custom { public function api() { $cache_file_identifier = $this->getUser(); if(/* cache_file_identifier is not null and found a valid file with cached query result */) { // return the result } else { try { // call Facebook_Custom::api, cache and return the result } catch(FacebookApiException $e) { // if Access Token is expired force refreshing it parent::redirect_to_auth_dialog(); } } } // .. some other stuff related to caching } Now this pretty much works. A new instance of Facebook_Cache gives me all the functionality.
Shorthand methods from Facebook_Custom use caching, because Facebook_Cache overrode the api() method. But here is what is bothering me: I think it's too much inheritance. It's all very tightly coupled - look how I had to specify 'Facebook_Custom::api' instead of 'parent::api' to avoid an api() method loop when extending Facebook_Cache. Overall mess and ugliness. So again, this works but I'm just asking about patterns / ways of doing this in a cleaner and smarter way.
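
    One way to flatten the hierarchy - a sketch of composition rather than another inheritance layer, where the class name Facebook_Api_Cache and the cache methods has/get/set are hypothetical stand-ins for the framework's cache API - is to keep Facebook_Session as the only subclass and wrap it with a small caching decorator:

        class Facebook_Api_Cache
        {
            private $facebook;   // a Facebook_Session instance
            private $cache;      // the framework's cache object (hypothetical API below)

            public function __construct($facebook, $cache)
            {
                $this->facebook = $facebook;
                $this->cache    = $cache;
            }

            // Same calling convention as BaseFacebook::api(), but memoized per user.
            public function api()
            {
                $args = func_get_args();
                $key  = 'fb_' . $this->facebook->getUser() . '_' . md5(serialize($args));
                if ($this->cache->has($key)) {                    // hypothetical cache methods
                    return $this->cache->get($key);
                }
                $result = call_user_func_array(array($this->facebook, 'api'), $args);
                $this->cache->set($key, $result);
                return $result;
            }
        }

    The shorthand and authorization helpers would then receive the wrapper (or the plain client) in their constructors and call its api(), so caching applies to them without Facebook_Custom and Facebook_Cache having to sit in one inheritance chain.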

    Read the article

  • Problems doing asynch operations in C# using Mutex.

    - by firoso
    I've tried this MANY ways, here is the current iteration. I think I've just implemented this all wrong. What I'm trying to accomplish is to treat this Asynch result in such a way that until it returns AND I finish with my add-thumbnail call, I will not request another call to imageProvider.BeginGetImage. To Clarify, my question is two-fold. Why does what I'm doing never seem to halt at my Mutex.WaitOne() call, and what is the proper way to handle this scenario? /// <summary> /// re-creates a list of thumbnails from a list of TreeElementViewModels (directories) /// </summary> /// <param name="list">the list of TreeElementViewModels to process</param> public void BeginLayout(List<AiTreeElementViewModel> list) { // *removed code for canceling and cleanup from previous calls* // Starts the processing of all folders in parallel. Task.Factory.StartNew(() => { thumbnailRequests = Parallel.ForEach<AiTreeElementViewModel>(list, options, ProcessFolder); }); } /// <summary> /// Processes a folder for all of it's image paths and loads them from disk. /// </summary> /// <param name="element">the tree element to process</param> private void ProcessFolder(AiTreeElementViewModel element) { try { var images = ImageCrawler.GetImagePaths(element.Path); AsyncCallback callback = AddThumbnail; foreach (var image in images) { Console.WriteLine("Attempting Enter"); synchMutex.WaitOne(); Console.WriteLine("Entered"); var result = imageProvider.BeginGetImage(callback, image); } } catch (Exception exc) { Console.WriteLine(exc.ToString()); // TODO: Do Something here. } } /// <summary> /// Adds a thumbnail to the Browser /// </summary> /// <param name="result">an async result used for retrieving state data from the load task.</param> private void AddThumbnail(IAsyncResult result) { lock (Thumbnails) { try { Stream image = imageProvider.EndGetImage(result); string filename = imageProvider.GetImageName(result); string imagePath = imageProvider.GetImagePath(result); var imageviewmodel = new AiImageThumbnailViewModel(image, filename, imagePath); thumbnailHash[imagePath] = imageviewmodel; HostInvoke(() => Thumbnails.Add(imageviewmodel)); UpdateChildZoom(); //synchMutex.ReleaseMutex(); Console.WriteLine("Exited"); } catch (Exception exc) { Console.WriteLine(exc.ToString()); // TODO: Do Something here. } } }
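
    On the first half of the question: a Mutex is re-entrant for the thread that already owns it, so the repeated WaitOne calls made from the same ProcessFolder thread all succeed immediately, and ReleaseMutex throws if called from a thread that does not own the mutex, which is why releasing it from the AddThumbnail callback (a thread-pool thread) cannot work. A minimal sketch of the throttling idea using SemaphoreSlim instead (available in .NET 4, and free of thread affinity); names mirror the question's fields and methods:

        private readonly SemaphoreSlim requestGate = new SemaphoreSlim(1, 1);

        private void ProcessFolder(AiTreeElementViewModel element)
        {
            foreach (var image in ImageCrawler.GetImagePaths(element.Path))
            {
                requestGate.Wait();                                // blocks until the previous request finished
                imageProvider.BeginGetImage(AddThumbnail, image);
            }
        }

        private void AddThumbnail(IAsyncResult result)
        {
            try
            {
                // ... build and add the thumbnail view model as in the original code ...
            }
            finally
            {
                requestGate.Release();                             // let the next BeginGetImage start
            }
        }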

    Read the article

  • Cannot install Apache Web Server on Ubuntu, Amazon WS

    - by Eugene Retunsky
    I enter command apt-get install apache2 --fix-missing (under the root user) and this is what I receive: Reading package lists... Done Building dependency tree Reading state information... Done The following extra packages will be installed: apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap ssl-cert Suggested packages: apache2-doc apache2-suexec apache2-suexec-custom openssl-blacklist The following NEW packages will be installed: apache2 apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap ssl-cert 0 upgraded, 10 newly installed, 0 to remove and 36 not upgraded. Need to get 2,945 kB/3,141 kB of archives. After this operation, 10.4 MB of additional disk space will be used. Do you want to continue [Y/n]? y Err http://us-west-1.ec2.archive.ubuntu.com/ubuntu/ oneiric-updates/main apache2.2-bin i386 2.2.20-1ubuntu1.1 404 Not Found [IP: 10.161.51.124 80] Err http://security.ubuntu.com/ubuntu/ oneiric-security/main apache2.2-bin i386 2.2.20-1ubuntu1.1 404 Not Found [IP: 91.189.92.167 80] Err http://security.ubuntu.com/ubuntu/ oneiric-security/main apache2-utils i386 2.2.20-1ubuntu1.1 404 Not Found [IP: 91.189.92.167 80] Err http://security.ubuntu.com/ubuntu/ oneiric-security/main apache2.2-common i386 2.2.20-1ubuntu1.1 404 Not Found [IP: 91.189.92.167 80] Err http://security.ubuntu.com/ubuntu/ oneiric-security/main apache2-mpm-worker i386 2.2.20-1ubuntu1.1 404 Not Found [IP: 91.189.92.167 80] Err http://security.ubuntu.com/ubuntu/ oneiric-security/main apache2 i386 2.2.20-1ubuntu1.1 404 Not Found [IP: 91.189.92.167 80] Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/a/apache2/apache2.2-bin_2.2.20-1ubuntu1.1_i386.deb 404 Not Found [IP: 91.189.92.167 80] Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/a/apache2/apache2-utils_2.2.20-1ubuntu1.1_i386.deb 404 Not Found [IP: 91.189.92.167 80] Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/a/apache2/apache2.2-common_2.2.20-1ubuntu1.1_i386.deb 404 Not Found [IP: 91.189.92.167 80] Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/a/apache2/apache2-mpm-worker_2.2.20-1ubuntu1.1_i386.deb 404 Not Found [IP: 91.189.92.167 80] Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/a/apache2/apache2_2.2.20-1ubuntu1.1_i386.deb 404 Not Found [IP: 91.189.92.167 80] Unable to correct missing packages. E: Aborting install. Any help is appreciated.
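
    The 404s usually mean the local package index is stale: it still points at the 2.2.20-1ubuntu1.1 packages, which the mirrors have since superseded. A sketch of the standard first step before retrying the install:

        apt-get update            # refresh the package lists so they reference the current revisions
        apt-get install apache2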

    Read the article

  • How to do inclusive range queries when only half-open range is supported (ala SortedMap.subMap)

    - by polygenelubricants
    On SortedMap.subMap This is the API for SortedMap<K,V>.subMap: SortedMap<K,V> subMap(K fromKey, K toKey) : Returns a view of the portion of this map whose keys range from fromKey, inclusive, to toKey, exclusive. This inclusive lower bound, exclusive upper bound combo ("half-open range") is something that is prevalent in Java, and while it does have its benefits, it also has its quirks, as we shall soon see. The following snippet illustrates a simple usage of subMap: static <K,V> SortedMap<K,V> someSortOfSortedMap() { return Collections.synchronizedSortedMap(new TreeMap<K,V>()); } //... SortedMap<Integer,String> map = someSortOfSortedMap(); map.put(1, "One"); map.put(3, "Three"); map.put(5, "Five"); map.put(7, "Seven"); map.put(9, "Nine"); System.out.println(map.subMap(0, 4)); // prints "{1=One, 3=Three}" System.out.println(map.subMap(3, 7)); // prints "{3=Three, 5=Five}" The last line is important: 7=Seven is excluded, due to the exclusive upper bound nature of subMap. Now suppose that we actually need an inclusive upper bound, then we could try to write a utility method like this: static <V> SortedMap<Integer,V> subMapInclusive(SortedMap<Integer,V> map, int from, int to) { return (to == Integer.MAX_VALUE) ? map.tailMap(from) : map.subMap(from, to + 1); } Then, continuing on with the above snippet, we get: System.out.println(subMapInclusive(map, 3, 7)); // prints "{3=Three, 5=Five, 7=Seven}" map.put(Integer.MAX_VALUE, "Infinity"); System.out.println(subMapInclusive(map, 5, Integer.MAX_VALUE)); // {5=Five, 7=Seven, 9=Nine, 2147483647=Infinity} A couple of key observations need to be made: The good news is that we don't care about the type of the values, but... subMapInclusive assumes Integer keys for to + 1 to work. A generic version that also takes e.g. Long keys is not possible (see related questions) Not to mention that for Long, we need to compare against Long.MAX_VALUE instead Overloads for the numeric primitive boxed types Byte, Character, etc, as keys, must all be written individually A special check need to be made for toInclusive == Integer.MAX_VALUE, because +1 would overflow, and subMap would throw IllegalArgumentException: fromKey > toKey This, generally speaking, is an overly ugly and overly specific solution What about String keys? Or some unknown type that may not even be Comparable<?>? So the question is: is it possible to write a general subMapInclusive method that takes a SortedMap<K,V>, and K fromKey, K toKey, and perform an inclusive-range subMap queries? Related questions Are upper bounds of indexed ranges always assumed to be exclusive? Is it possible to write a generic +1 method for numeric box types in Java? On NavigableMap It should be mentioned that there's a NavigableMap.subMap overload that takes two additional boolean variables to signify whether the bounds are inclusive or exclusive. Had this been made available in SortedMap, then none of the above would've even been asked. So working with a NavigableMap<K,V> for inclusive range queries would've been ideal, but while Collections provides utility methods for SortedMap (among other things), we aren't afforded the same luxury with NavigableMap. Related questions Writing a synchronized thread-safety wrapper for NavigableMap On API providing only exclusive upper bound range queries Does this highlight a problem with exclusive upper bound range queries? How were inclusive range queries done in the past when exclusive upper bound is the only available functionality?
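
    For completeness, a sketch of what the ideal case mentioned at the end looks like on Java 6, where TreeMap already implements NavigableMap, so the inclusive bound is a flag rather than a "+1" on the key (the remaining gap the question raises - a Collections-style synchronized wrapper for NavigableMap - still has to be written by hand):

        NavigableMap<Integer, String> map = new TreeMap<Integer, String>();
        map.put(1, "One");
        map.put(3, "Three");
        map.put(5, "Five");
        map.put(7, "Seven");
        map.put(9, "Nine");

        // fromInclusive = true, toInclusive = true: works for any Comparable key type
        System.out.println(map.subMap(3, true, 7, true)); // prints "{3=Three, 5=Five, 7=Seven}"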

    Read the article

  • Good Secure Backups for Developers at Home

    - by slashmais
    What is a good, secure, method to do backups, for programmers who do research & development at home and cannot afford to lose any work? Conditions: The backups must ALWAYS be within reasonably easy reach. Internet connection cannot be guaranteed to be always available. The solution must be either FREE or priced within reason, and subject to 2 above. Status Report This is for now only considering free options. The following open-source projects are suggested in the answers (here & elsewhere): BackupPC is a high-performance, enterprise-grade system for backing up Linux, WinXX and MacOSX PCs and laptops to a server's disk. Storebackup is a backup utility that stores files on other disks. mybackware: These scripts were developed to create SQL dump files for basic disaster recovery of small MySQL installations. Bacula is [...] to manage backup, recovery, and verification of computer data across a network of computers of different kinds. In technical terms, it is a network based backup program. AutoDL 2 and Sec-Bk: AutoDL 2 is a scalable transport independant automated file transfer system. It is suitable for uploading files from a staging server to every server on a production server farm [...] Sec-Bk is a set of simple utilities to securely back up files to a remote location, even a public storage location. rsnapshot is a filesystem snapshot utility for making backups of local and remote systems. rbme: Using rsync for backups [...] you get perpetual incremental backups that appear as full backups (for each day) and thus allow easy restore or further copying to tape etc. Duplicity backs directories by producing encrypted tar-format volumes and uploading them to a remote or local file server. [...] uses librsync, [for] incremental archives Other Possibilities: Using a Distributed Version Control System (DVCS) such as Git(/Easy Git), Bazaar, Mercurial answers the need to have the backup available locally. Use free online storage space as a remote backup, e.g.: compress your work/backup directory and mail it to your gmail account. Strategies See crazyscot's answer

    Read the article

  • Mac OS - Built SVN from source, now Apache2 not loading sites

    - by Geuis
    This relates to another question I asked earlier today. I built SVN 1.6.2 from source. In the process, it has completely screwed up my dev environment. After I built SVN, Apache wasn't loading. It was giving me this error: Syntax error on line 117 of /private/etc/apache2/httpd.conf: Cannot load /usr/libexec/apache2/mod_dav_svn.so into server: dlopen(/usr/libexec/apache2/mod_dav_svn.so, 10): no suitable image found. Did find:\n\t/usr/libexec/apache2/mod_dav_svn.so: mach-o, but wrong architecture It appears that SVN overwrote the old mod_dav_svn.so and I am not able to get it to build as FAT, and I can't recover whatever was originally there. I resolved this (temporarily?) by commenting out the line that was loading the mod_dav_svn.so and got Apache to start at this point. However, even though Apache is running I am now getting this error when trying to access my dev sites: Directory index forbidden by Options directive: /usr/share/tomcat6/webapps/ROOT/ I have Apache2 sitting in front of Tomcat6. I access my local dev site using the internal name "http://localthesite". I have had virtual directories set up that have worked until this SVN debacle. Tomcat is installed at /usr/local/apache-tomcat, and webapps is /usr/local/apache-tomcat/webapps. Our production servers deploy tomcat to /usr/share/tomcat6, so I have symlinks set up on my system to replicate this as well. These point back to the actual installation path. This has all been working fine as well. None of our configurations for Apache2, Tomcat, or .htaccess have changed. Over the weekend, I performed a "Repair Disk Permissions" on the system. This was before I discovered the mod_dav_svn.so problem. I have been reading up on this all morning and the most common answer is that there is an Options -Indexes set. We have this in a config file, but it was there before and when I removed it during testing, I still got the same errors from Apache. At this point, I'm assuming I either totally borked the native Apache2 installation on this Mac, or that there is a permissions error somewhere that I'm missing. The permissions error could be from the SVN installation, or from my repair process. Does anyone have any idea what could be the problem? I'm totally blocked right now and have no idea where to check next.
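
    On the original "wrong architecture" error, a quick check (assuming the stock OS X paths) is to compare the architectures baked into the rebuilt module against those in Apple's httpd, since Snow Leopard's Apache runs 64-bit by default and will reject a module built for only one architecture:

        file /usr/libexec/apache2/mod_dav_svn.so   # lists the architectures the module actually contains
        file /usr/sbin/httpd                       # Apple's bundled httpd, normally a fat 32/64-bit binary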

    Read the article

  • Clipplanes, vertex shaders and hardware vertex processing in Direct3D 9

    - by Igor
    Hi, I have an issue with clipplanes in my application that I can reproduce in a sample from DirectX SDK (February 2010). I added a clipplane to the HLSLwithoutEffects sample: ... D3DXPLANE g_Plane( 0.0f, 1.0f, 0.0f, 0.0f ); ... void SetupClipPlane(const D3DXMATRIXA16 & view, const D3DXMATRIXA16 & proj) { D3DXMATRIXA16 m = view * proj; D3DXMatrixInverse( &m, NULL, &m ); D3DXMatrixTranspose( &m, &m ); D3DXPLANE plane; D3DXPlaneNormalize( &plane, &g_Plane ); D3DXPLANE clipSpacePlane; D3DXPlaneTransform( &clipSpacePlane, &plane, &m ); DXUTGetD3D9Device()->SetClipPlane( 0, clipSpacePlane ); } void CALLBACK OnFrameMove( double fTime, float fElapsedTime, void* pUserContext ) { // Update the camera's position based on user input g_Camera.FrameMove( fElapsedTime ); // Set up the vertex shader constants D3DXMATRIXA16 mWorldViewProj; D3DXMATRIXA16 mWorld; D3DXMATRIXA16 mView; D3DXMATRIXA16 mProj; mWorld = *g_Camera.GetWorldMatrix(); mView = *g_Camera.GetViewMatrix(); mProj = *g_Camera.GetProjMatrix(); mWorldViewProj = mWorld * mView * mProj; g_pConstantTable->SetMatrix( DXUTGetD3D9Device(), "mWorldViewProj", &mWorldViewProj ); g_pConstantTable->SetFloat( DXUTGetD3D9Device(), "fTime", ( float )fTime ); SetupClipPlane( mView, mProj ); } void CALLBACK OnFrameRender( IDirect3DDevice9* pd3dDevice, double fTime, float fElapsedTime, void* pUserContext ) { // If the settings dialog is being shown, then // render it instead of rendering the app's scene if( g_SettingsDlg.IsActive() ) { g_SettingsDlg.OnRender( fElapsedTime ); return; } HRESULT hr; // Clear the render target and the zbuffer V( pd3dDevice->Clear( 0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, D3DCOLOR_ARGB( 0, 45, 50, 170 ), 1.0f, 0 ) ); // Render the scene if( SUCCEEDED( pd3dDevice->BeginScene() ) ) { pd3dDevice->SetVertexDeclaration( g_pVertexDeclaration ); pd3dDevice->SetVertexShader( g_pVertexShader ); pd3dDevice->SetStreamSource( 0, g_pVB, 0, sizeof( D3DXVECTOR2 ) ); pd3dDevice->SetIndices( g_pIB ); pd3dDevice->SetRenderState( D3DRS_CLIPPLANEENABLE, D3DCLIPPLANE0 ); V( pd3dDevice->DrawIndexedPrimitive( D3DPT_TRIANGLELIST, 0, 0, g_dwNumVertices, 0, g_dwNumIndices / 3 ) ); pd3dDevice->SetRenderState( D3DRS_CLIPPLANEENABLE, 0 ); RenderText(); V( g_HUD.OnRender( fElapsedTime ) ); V( pd3dDevice->EndScene() ); } } When I rotate the camera I have different visual results when using hardware and software vertex processing. In software vertex processing mode or when using the reference device the clipping plane works fine as expected. In hardware mode it seems to rotate with the camera. If I remove the call to RenderText(); from OnFrameRender then hardware rendering also works fine. Further debugging reveals that the problem is in ID3DXFont::DrawText. I have this issue in Windows Vista and Windows 7 but not in Windows XP. I tested the code with the latest NVidia and ATI drivers in all three OSes on different PCs. Is it a DirectX issue? Or incorrect usage of clipplanes? Thanks Igor

    Read the article

  • Why is Oracle using a skip scan for this query?

    - by Jason Baker
    Here's the tkprof output for a query that's running extremely slowly (WARNING: it's long :-) ): SELECT mbr_comment_idn, mbr_crt_dt, mbr_data_source, mbr_dol_bl_rmo_ind, mbr_dxcg_ctl_member, mbr_employment_start_dt, mbr_employment_term_dt, mbr_entity_active, mbr_ethnicity_idn, mbr_general_health_status_code, mbr_hand_dominant_code, mbr_hgt_feet, mbr_hgt_inches, mbr_highest_edu_level, mbr_insd_addr_idn, mbr_insd_alt_id, mbr_insd_name, mbr_insd_ssn_tin, mbr_is_smoker, mbr_is_vip, mbr_lmbr_first_name, mbr_lmbr_last_name, mbr_marital_status_cd, mbr_mbr_birth_dt, mbr_mbr_death_dt, mbr_mbr_expired, mbr_mbr_first_name, mbr_mbr_gender_cd, mbr_mbr_idn, mbr_mbr_ins_type, mbr_mbr_isreadonly, mbr_mbr_last_name, mbr_mbr_middle_name, mbr_mbr_name, mbr_mbr_status_idn, mbr_mpi_id, mbr_preferred_am_pm, mbr_preferred_time, mbr_prv_innetwork, mbr_rep_addr_idn, mbr_rep_name, mbr_rp_mbr_id, mbr_same_mbr_ins, mbr_special_needs_cd, mbr_timezone, mbr_upd_dt, mbr_user_idn, mbr_wgt, mbr_work_status_idn FROM (SELECT /*+ FIRST_ROWS(1) */ mbr_comment_idn, mbr_crt_dt, mbr_data_source, mbr_dol_bl_rmo_ind, mbr_dxcg_ctl_member, mbr_employment_start_dt, mbr_employment_term_dt, mbr_entity_active, mbr_ethnicity_idn, mbr_general_health_status_code, mbr_hand_dominant_code, mbr_hgt_feet, mbr_hgt_inches, mbr_highest_edu_level, mbr_insd_addr_idn, mbr_insd_alt_id, mbr_insd_name, mbr_insd_ssn_tin, mbr_is_smoker, mbr_is_vip, mbr_lmbr_first_name, mbr_lmbr_last_name, mbr_marital_status_cd, mbr_mbr_birth_dt, mbr_mbr_death_dt, mbr_mbr_expired, mbr_mbr_first_name, mbr_mbr_gender_cd, mbr_mbr_idn, mbr_mbr_ins_type, mbr_mbr_isreadonly, mbr_mbr_last_name, mbr_mbr_middle_name, mbr_mbr_name, mbr_mbr_status_idn, mbr_mpi_id, mbr_preferred_am_pm, mbr_preferred_time, mbr_prv_innetwork, mbr_rep_addr_idn, mbr_rep_name, mbr_rp_mbr_id, mbr_same_mbr_ins, mbr_special_needs_cd, mbr_timezone, mbr_upd_dt, mbr_user_idn, mbr_wgt, mbr_work_status_idn, ROWNUM AS ora_rn FROM (SELECT mbr.comment_idn AS mbr_comment_idn, mbr.crt_dt AS mbr_crt_dt, mbr.data_source AS mbr_data_source, mbr.dol_bl_rmo_ind AS mbr_dol_bl_rmo_ind, mbr.dxcg_ctl_member AS mbr_dxcg_ctl_member, mbr.employment_start_dt AS mbr_employment_start_dt, mbr.employment_term_dt AS mbr_employment_term_dt, mbr.entity_active AS mbr_entity_active, mbr.ethnicity_idn AS mbr_ethnicity_idn, mbr.general_health_status_code AS mbr_general_health_status_code, mbr.hand_dominant_code AS mbr_hand_dominant_code, mbr.hgt_feet AS mbr_hgt_feet, mbr.hgt_inches AS mbr_hgt_inches, mbr.highest_edu_level AS mbr_highest_edu_level, mbr.insd_addr_idn AS mbr_insd_addr_idn, mbr.insd_alt_id AS mbr_insd_alt_id, mbr.insd_name AS mbr_insd_name, mbr.insd_ssn_tin AS mbr_insd_ssn_tin, mbr.is_smoker AS mbr_is_smoker, mbr.is_vip AS mbr_is_vip, mbr.lmbr_first_name AS mbr_lmbr_first_name, mbr.lmbr_last_name AS mbr_lmbr_last_name, mbr.marital_status_cd AS mbr_marital_status_cd, mbr.mbr_birth_dt AS mbr_mbr_birth_dt, mbr.mbr_death_dt AS mbr_mbr_death_dt, mbr.mbr_expired AS mbr_mbr_expired, mbr.mbr_first_name AS mbr_mbr_first_name, mbr.mbr_gender_cd AS mbr_mbr_gender_cd, mbr.mbr_idn AS mbr_mbr_idn, mbr.mbr_ins_type AS mbr_mbr_ins_type, mbr.mbr_isreadonly AS mbr_mbr_isreadonly, mbr.mbr_last_name AS mbr_mbr_last_name, mbr.mbr_middle_name AS mbr_mbr_middle_name, mbr.mbr_name AS mbr_mbr_name, mbr.mbr_status_idn AS mbr_mbr_status_idn, mbr.mpi_id AS mbr_mpi_id, mbr.preferred_am_pm AS mbr_preferred_am_pm, mbr.preferred_time AS mbr_preferred_time, mbr.prv_innetwork AS mbr_prv_innetwork, mbr.rep_addr_idn AS mbr_rep_addr_idn, mbr.rep_name AS 
mbr_rep_name, mbr.rp_mbr_id AS mbr_rp_mbr_id, mbr.same_mbr_ins AS mbr_same_mbr_ins, mbr.special_needs_cd AS mbr_special_needs_cd, mbr.timezone AS mbr_timezone, mbr.upd_dt AS mbr_upd_dt, mbr.user_idn AS mbr_user_idn, mbr.wgt AS mbr_wgt, mbr.work_status_idn AS mbr_work_status_idn FROM mbr JOIN mbr_identfn ON mbr.mbr_idn = mbr_identfn.mbr_idn WHERE mbr_identfn.mbr_idn = mbr.mbr_idn AND mbr_identfn.identfd_type = :identfd_type_1 AND mbr_identfn.identfd_number = :identfd_number_1 AND mbr_identfn.entity_active = :entity_active_1) WHERE ROWNUM <= :ROWNUM_1) WHERE ora_rn > :ora_rn_1 call count cpu elapsed disk query current rows ------- ------ -------- ---------- ---------- ---------- ---------- ---------- Parse 9936 0.46 0.49 0 0 0 0 Execute 9936 0.60 0.59 0 0 0 0 Fetch 9936 329.87 404.00 0 136966922 0 0 ------- ------ -------- ---------- ---------- ---------- ---------- ---------- total 29808 330.94 405.09 0 136966922 0 0 Misses in library cache during parse: 0 Optimizer mode: FIRST_ROWS Parsing user id: 36 (JIVA_DEV) Rows Row Source Operation ------- --------------------------------------------------- 0 VIEW (cr=102 pr=0 pw=0 time=2180 us) 0 COUNT STOPKEY (cr=102 pr=0 pw=0 time=2163 us) 0 NESTED LOOPS (cr=102 pr=0 pw=0 time=2152 us) 0 INDEX SKIP SCAN IDX_MBR_IDENTFN (cr=102 pr=0 pw=0 time=2140 us)(object id 341053) 0 TABLE ACCESS BY INDEX ROWID MBR (cr=0 pr=0 pw=0 time=0 us) 0 INDEX UNIQUE SCAN PK_CLAIMANT (cr=0 pr=0 pw=0 time=0 us)(object id 334044) Rows Execution Plan ------- --------------------------------------------------- 0 SELECT STATEMENT MODE: HINT: FIRST_ROWS 0 VIEW 0 COUNT (STOPKEY) 0 NESTED LOOPS 0 INDEX MODE: ANALYZED (SKIP SCAN) OF 'IDX_MBR_IDENTFN' (INDEX (UNIQUE)) 0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'MBR' (TABLE) 0 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'PK_CLAIMANT' (INDEX (UNIQUE)) ******************************************************************************** Based on my reading of Oracle's documentation of skip scans, a skip scan is most useful when the first column of an index has a low number of unique values. The thing is that the first index of this column is a unique primary key. So am I correct in assuming that a skip scan is the wrong thing to do here? Also, what kind of scan should it be doing? Should I do some more hinting for this query? EDIT: I should also point out that the query's where clause uses the columns in IDX_MBR_IDENTFN and no columns other than what's in that index. So as far as I can tell, I'm not skipping any columns.
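
    One quick sanity check before reaching for hints - a sketch against the standard data dictionary view - is to confirm what the leading column of IDX_MBR_IDENTFN actually is, since a skip scan only makes sense when the predicate omits (skips) that leading column:

        SELECT index_name, column_position, column_name
          FROM user_ind_columns
         WHERE index_name = 'IDX_MBR_IDENTFN'
         ORDER BY column_position;

    If the leading column turns out not to be one of identfd_type / identfd_number / entity_active / mbr_idn from the WHERE clause, the skip scan (and the row counts in the tkprof output) follows directly from that column order.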

    Read the article

  • error working with wsdl files in visual studio 2008

    - by deostroll
    Hi. I got a wsdl file in email. At first I didn't know how to use it. I've simply saved the file to my disk. Opened visual studio...added a service reference...provided path to file, and service was discovered. I opened the object browser to see the types and methods that got imported. I figure anything that ends with the name 'Client' is a good place to start using the web service. I've tried using a simple method to get data but it has run into and expception. Need help in resolving it. System.InvalidOperationException was unhandled Message="The XML element 'ListsRequest' from namespace 'http://www.asd.org/MGMMIRAGE.MDM.WS/Customer' references a method and a type. Change the method's message name using WebMethodAttribute or change the type's root element using the XmlRootAttribute." Source="System.Xml" StackTrace: at System.Xml.Serialization.XmlReflectionImporter.ReconcileAccessor(Accessor accessor, NameTable accessors) at System.Xml.Serialization.XmlReflectionImporter.ImportMembersMapping(String elementName, String ns, XmlReflectionMember[] members, Boolean hasWrapperElement, Boolean rpc, Boolean openModel, XmlMappingAccess access) at System.Xml.Serialization.XmlReflectionImporter.ImportMembersMapping(String elementName, String ns, XmlReflectionMember[] members, Boolean hasWrapperElement, Boolean rpc, Boolean openModel) at System.Xml.Serialization.XmlReflectionImporter.ImportMembersMapping(String elementName, String ns, XmlReflectionMember[] members, Boolean hasWrapperElement, Boolean rpc) at System.ServiceModel.Description.XmlSerializerOperationBehavior.Reflector.XmlSerializerImporter.ImportMembersMapping(XmlName elementName, String ns, XmlReflectionMember[] members, Boolean hasWrapperElement, Boolean rpc, Boolean isEncoded, String mappingKey) at System.ServiceModel.Description.XmlSerializerOperationBehavior.Reflector.OperationReflector.ImportMembersMapping(String elementName, String ns, XmlReflectionMember[] members, Boolean hasWrapperElement, Boolean rpc, String mappingKey) at System.ServiceModel.Description.XmlSerializerOperationBehavior.Reflector.OperationReflector.LoadBodyMapping(MessageDescription message, String mappingKey, MessagePartDescriptionCollection& rpcEncodedTypedMessageBodyParts) at System.ServiceModel.Description.XmlSerializerOperationBehavior.Reflector.OperationReflector.CreateMessageInfo(MessageDescription message, String key) at System.ServiceModel.Description.XmlSerializerOperationBehavior.Reflector.OperationReflector.EnsureMessageInfos() at System.ServiceModel.Description.XmlSerializerOperationBehavior.Reflector.EnsureMessageInfos() at System.ServiceModel.Description.XmlSerializerOperationBehavior.Reflector.OperationReflector.get_Request() at System.ServiceModel.Description.XmlSerializerOperationBehavior.CreateFormatter() at System.ServiceModel.Description.XmlSerializerOperationBehavior.System.ServiceModel.Description.IOperationBehavior.ApplyClientBehavior(OperationDescription description, ClientOperation proxy) at System.ServiceModel.Description.DispatcherBuilder.BindOperations(ContractDescription contract, ClientRuntime proxy, DispatchRuntime dispatch) at System.ServiceModel.Description.DispatcherBuilder.ApplyClientBehavior(ServiceEndpoint serviceEndpoint, ClientRuntime clientRuntime) at System.ServiceModel.Description.DispatcherBuilder.BuildProxyBehavior(ServiceEndpoint serviceEndpoint, BindingParameterCollection& parameters) at System.ServiceModel.Channels.ServiceChannelFactory.BuildChannelFactory(ServiceEndpoint serviceEndpoint) at 
System.ServiceModel.ChannelFactory.CreateFactory() at System.ServiceModel.ChannelFactory.OnOpening() at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout) at System.ServiceModel.ChannelFactory.EnsureOpened() at System.ServiceModel.ChannelFactory`1.CreateChannel(EndpointAddress address, Uri via) at System.ServiceModel.ChannelFactory`1.CreateChannel() at System.ServiceModel.ClientBase`1.CreateChannel() at System.ServiceModel.ClientBase`1.CreateChannelInternal() at System.ServiceModel.ClientBase`1.get_Channel() at MDMWSDemo.MDMWebSrvc.MGMCustomerSoapPortTypeClient.MDMWSDemo.MDMWebSrvc.MGMCustomerSoapPortType.CountryCodeGet(CountryCodeGetRequest request) in C:\Documents and Settings\tbhagava01\My Documents\Visual Studio 2008\Projects\MDMWSDemo\MDMWSDemo\Service References\MDMWebSrvc\Reference.cs:line 2983 at MDMWSDemo.MDMWebSrvc.MGMCustomerSoapPortTypeClient.CountryCodeGet(String countryCode) in C:\Documents and Settings\tbhagava01\My Documents\Visual Studio 2008\Projects\MDMWSDemo\MDMWSDemo\Service References\MDMWebSrvc\Reference.cs:line 2989 at MDMWSDemo.Program.Main(String[] args) in C:\Documents and Settings\tbhagava01\My Documents\Visual Studio 2008\Projects\MDMWSDemo\MDMWSDemo\Program.cs:line 15 at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args) at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args) at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly() at System.Threading.ThreadHelper.ThreadStart_Context(Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ThreadHelper.ThreadStart() InnerException:

    Read the article

  • Changing html <-> ajax <-> php/mysql to threaded approach

    - by Saif Bechan
    I have an application that needs to be updated in real time. There are various counters and other information that have to come from the database and the system needs to be up to date for the user. My approach now is just a normal ajax request every second to get the new values from the database. There is a JavaScript which loops every second getting the values through ajax. This works fine but I think it's very inefficient. The problem There is an ajax script that loops every second requesting data from php # On the server it has to load the PHP interpreter The PHP file has to get the data and format it correctly # PHP has to make a connection with the mysql database Work with the database (reads, never writes) Format the data so it can be sent Send the data back to the browser # Close the database connection, and close the php interpreter Last the browser has to read these values and update the various html parts Now with this approach it has to load the interpreter and make a db connection every second. I was thinking of a way to make this more efficient, and maybe use a threaded approach to this. Threaded approach Do a post to the PHP when you enter the page and keep the connection alive In PHP only load the interpreter once, and make a connection to the DB once Every second send an ajax response to the javascript listener The javascript listener then just changes values as the response from php arrives. I think this approach will be a great optimization to the server load and overall performance. But I can spot some weak points in the system and I need some help with these. Problems with the approach PHP execution time limit I don't think PHP is designed for such a setup. I know there is a time limit on php script execution. I don't know if an everlasting loop in PHP will cause any serious cpu/memory problems. Sending ajax requests without breaking I don't know if it is possible to have just one ajax post action and have it open and accepting data. User exits the page What will happen when the user exits the page and the PHP script is still going? Will it go on forever? Security issues So far I can't think of any security issues. Almost every setup you use has some security issues. Maybe there are some with this solution I do not know of. Open to other solutions I really want to change the setup as it is now and move to a threaded approach or better. If someone has a better approach to tackle this I definitely want to hear that. Maybe the usage of some other scripts is better suited for having an ongoing runtime. I only know php and java so any suggestions are welcome and I am willing to dig through. I know there are things like perl, python etcetera that are used for this type of threading but I don't know which one is best suited. When using another script If the best way is to go with another type of script like perl, python etcetera I do have some criteria.
The script has to be accessible via ajax post If it accepts some kind of json encode/decode it would be nice The script has to be able to access the session file This is essential because I need to know if the user is logged in The script has to be able to easily talk to MySQL All comments are welcome, and I hope this question is helpful to others also. Cheers!
    Read the article

  • My Dijit DateTimeCombo widget doesn't send selected value on form submission

    - by david bessire
    i need to create a Dojo widget that lets users specify date & time. i found a sample implementation attached to an entry in the Dojo bug tracker. It looks nice and mostly works, but when i submit the form, the value sent by the client is not the user-selected value but the value sent from the server. What changes do i need to make to get the widget to submit the date & time value? Sample usage is to render a JSP with basic HTML tags (form & input), then dojo.addOnLoad a function which selects the basic elements by ID, adds dojoType attribute, and dojo.parser.parse()-es the page. Thanks in advance. The widget is implemented in two files. The application uses Dojo 1.3. File 1: DateTimeCombo.js dojo.provide("dojox.form.DateTimeCombo"); dojo.require("dojox.form._DateTimeCombo"); dojo.require("dijit.form._DateTimeTextBox"); dojo.declare( "dojox.form.DateTimeCombo", dijit.form._DateTimeTextBox, { baseClass: "dojoxformDateTimeCombo dijitTextBox", popupClass: "dojox.form._DateTimeCombo", pickerPostOpen: "pickerPostOpen_fn", _selector: 'date', constructor: function (argv) {}, postMixInProperties: function() { dojo.mixin(this.constraints, { /* datePattern: 'MM/dd/yyyy HH:mm:ss', timePattern: 'HH:mm:ss', */ datePattern: 'MM/dd/yyyy HH:mm', timePattern: 'HH:mm', clickableIncrement:'T00:15:00', visibleIncrement:'T00:15:00', visibleRange:'T01:00:00' }); this.inherited(arguments); }, _open: function () { this.inherited(arguments); if (this._picker!==null && (this.pickerPostOpen!==null && this.pickerPostOpen!=="")) { if (this._picker.pickerPostOpen_fn!==null) { this._picker.pickerPostOpen_fn(this); } } } } ); File 2: _DateTimeCombo.js dojo.provide("dojox.form._DateTimeCombo"); dojo.require("dojo.date.stamp"); dojo.require("dijit._Widget"); dojo.require("dijit._Templated"); dojo.require("dijit._Calendar"); dojo.require("dijit.form.TimeTextBox"); dojo.require("dijit.form.Button"); dojo.declare("dojox.form._DateTimeCombo", [dijit._Widget, dijit._Templated], { // invoked only if time picker is empty defaultTime: function () { var res= new Date(); res.setHours(0,0,0); return res; }, // id of this table below is the same as this.id templateString: " <table class=\"dojoxDateTimeCombo\" waiRole=\"presentation\">\ <tr class=\"dojoxTDComboCalendarContainer\">\ <td>\ <center><input dojoAttachPoint=\"calendar\" dojoType=\"dijit._Calendar\"></input></center>\ </td>\ </tr>\ <tr class=\"dojoxTDComboTimeTextBoxContainer\">\ <td>\ <center><input dojoAttachPoint=\"timePicker\" dojoType=\"dijit.form.TimeTextBox\"></input></center>\ </td>\ </tr>\ <tr><td><center><button dojoAttachPoint=\"ctButton\" dojoType=\"dijit.form.Button\">Ok</button></center></td></tr>\ </table>\ ", widgetsInTemplate: true, constructor: function(arg) {}, postMixInProperties: function() { this.inherited(arguments); }, postCreate: function() { this.inherited(arguments); this.connect(this.ctButton, "onClick", "_onValueSelected"); }, // initialize pickers to calendar value pickerPostOpen_fn: function (parent_inst) { var parent_value = parent_inst.attr('value'); if (parent_value !== null) { this.setValue(parent_value); } }, // expects a valid date object setValue: function(value) { if (value!==null) { this.calendar.attr('value', value); this.timePicker.attr('value', value); } }, // return a Date constructed date in calendar & time in time picker. 
getValue: function() { var value = this.calendar.attr('value'); var result=value; if (this.timePicker.value !== null) { if ((this.timePicker.value instanceof Date) === true) { result.setHours(this.timePicker.value.getHours(), this.timePicker.value.getMinutes(), this.timePicker.value.getSeconds()); return result; } } else { var defTime=this.defaultTime(); result.setHours(defTime.getHours(), defTime.getMinutes(), defTime.getSeconds()); return result; } }, _onValueSelected: function() { var value = this.getValue(); this.onValueSelected(value); }, onValueSelected: function(value) {} });

    Read the article

  • Unable to verify body hash for DKIM

    - by Joshua
    I'm writing a C# DKIM validator and have come across a problem that I cannot solve. Right now I am working on calculating the body hash, as described in Section 3.7 Computing the Message Hashes. I am working with emails that I have dumped using a modified version of EdgeTransportAsyncLogging sample in the Exchange 2010 Transport Agent SDK. Instead of converting the emails when saving, it just opens a file based on the MessageID and dumps the raw data to disk. I am able to successfully compute the body hash of the sample email provided in Section A.2 using the following code: SHA256Managed hasher = new SHA256Managed(); ASCIIEncoding asciiEncoding = new ASCIIEncoding(); string rawFullMessage = File.ReadAllText(@"C:\Repositories\Sample-A.2.txt"); string headerDelimiter = "\r\n\r\n"; int headerEnd = rawFullMessage.IndexOf(headerDelimiter); string header = rawFullMessage.Substring(0, headerEnd); string body = rawFullMessage.Substring(headerEnd + headerDelimiter.Length); byte[] bodyBytes = asciiEncoding.GetBytes(body); byte[] bodyHash = hasher.ComputeHash(bodyBytes); string bodyBase64 = Convert.ToBase64String(bodyHash); string expectedBase64 = "2jUSOH9NhtVGCQWNr9BrIAPreKQjO6Sn7XIkfJVOzv8="; Console.WriteLine("Expected hash: {1}{0}Computed hash: {2}{0}Are equal: {3}", Environment.NewLine, expectedBase64, bodyBase64, expectedBase64 == bodyBase64); The output from the above code is: Expected hash: 2jUSOH9NhtVGCQWNr9BrIAPreKQjO6Sn7XIkfJVOzv8= Computed hash: 2jUSOH9NhtVGCQWNr9BrIAPreKQjO6Sn7XIkfJVOzv8= Are equal: True Now, most emails come across with the c=relaxed/relaxed setting, which requires you to do some work on the body and header before hashing and verifying. And while I was working on it (failing to get it to work) I finally came across a message with c=simple/simple which means that you process the whole body as is minus any empty CRLF at the end of the body. (Really, the rules for Body Canonicalization are quite ... simple.) Here is the real DKIM email with a signature using the simple algorithm (with only unneeded headers cleaned up). Now, using the above code and updating the expectedBase64 hash I get the following results: Expected hash: VnGg12/s7xH3BraeN5LiiN+I2Ul/db5/jZYYgt4wEIw= Computed hash: ISNNtgnFZxmW6iuey/3Qql5u6nflKPTke4sMXWMxNUw= Are equal: False The expected hash is the value from the bh= field of the DKIM-Signature header. Now, the file used in the second test is a direct raw output from the Exchange 2010 Transport Agent. If so inclined, you can view the modified EdgeTransportLogging.txt. At this point, no matter how I modify the second email, changing the start position or number of CRLF at the end of the file I cannot get the files to match. What worries me is that I have been unable to validate any body hash so far (simple or relaxed) and that it may not be feasible to process DKIM through Exchange 2010.
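
    For reference, a sketch of the "simple" body canonicalization step from RFC 4871 Section 3.4.3, applied before hashing: strip all empty lines at the end of the body, then terminate it with exactly one CRLF (an empty body becomes a single CRLF). This assumes the dumped body uses CRLF line endings, as it must on the wire; bare LFs or a re-encoded body in the transport-agent dump would change the hash even when this step is right.

        using System.Text;
        using System.Text.RegularExpressions;

        static byte[] CanonicalizeBodySimple(string body)
        {
            // remove trailing empty lines, then end the body with exactly one CRLF
            string canonical = Regex.Replace(body, "(\r\n)+$", "") + "\r\n";
            return Encoding.ASCII.GetBytes(canonical);
        }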

    Read the article

  • R Install/Update on Mac OS X (Snow Leopard): where does R put files during install/config?

    - by doug
    In sum, there's a stray preference-like file or two (probably just one) that i can't find. Here's the whole story: I recently attempted to update my R install from 2.10 to 2.11. As i have done before, i installed from source. I know that all of the dependencies are correctly installed and made available to R, because my prior install worked fine. When i upgraded to 2.11, i am unable to install packages (no exception thrown, it just doesn't appear to complete the install and is unresponsive so i have to quit + restart R. Given i install from source, there are any number of points in the process that i could have messed up. What i need to do now is "start over" which requires that i clear out my my prior install. I am having trouble doing that. There is still at least one preference file or something like that i can't find and one of these is causing the problem, so i need to find it and terminate it with extreme prejudice before i do a fresh install. Although i set a number of flags during the install, i have never opted out of the default install locations during the config step. There has to be one or more preference files still in my file structure (and that's also accessible to the new install of R) because after i follow all of the steps below, then do a fresh install, when i start R for the first time, some of my preferences have persisted (e.g., quartz settings, GUI background color, editor selection, etc.). Again, the problem is that i just cannot locate those files. Finally, the problem can't be that during my last install from source, i inadvertently caused a preference file to be sent to an off-spec location--because again, a fresh R install (whether from source or from the OS X binaries) is finding those files Here's what i've done prior to attempting a clean install of R: Removed files from these locations: ~/.RData ~/.RHistory /Applications/R64.app /Applications/R.app /Library/Frameworks/R.framework (i also removed all symlinks from these) Cleared all RAM and disk caches, in particular the directory where i know R caches: ~/Library/Caches/R* (in fact i've cleared this entire directory) Checked for all 'hidden' files in the OS X directories where login/startup files are often placed: /etc/ ~/ In addition, i've checked R-help, and i've also read through the relevant portions of 'R Installation and Administration'--no luck. I've also searched searched my file structure using the various bash utilities, which nearly always solves problems of this sort quite easily, but in this case obviously searching by name or even pattern is more problematic.
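
    Since the settings that survive (quartz options, GUI background colour, editor selection) sound like R.app GUI preferences rather than files inside the framework, these are the usual per-user spots to check; the plist name below is the standard one for the CRAN GUI but is worth verifying locally:

        ls -a ~ | grep -i '^\.R'                         # ~/.Rprofile, ~/.Renviron, ~/.RData, ~/.Rhistory, ~/.Rapp.history
        ls ~/Library/Preferences | grep -i 'R-project'   # org.R-project.R.plist holds the R.app GUI preferences
        ls -d ~/Library/R 2>/dev/null                    # per-user settings/library used by the R.app GUI
        defaults read org.R-project.R                    # dump that preference domain directly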

    Read the article

  • Can the STREAM and GUPS (single CPU) benchmark use non-local memory in NUMA machine

    - by osgx
    Hello I want to run some tests from HPCC, STREAM and GUPS. They will test memory bandwidth, latency, and throughput (in term of random accesses). Can I start Single CPU test STREAM or Single CPU GUPS on NUMA node with memory interleaving enabled? (Is it allowed by the rules of HPCC - High Performance Computing Challenge?) Usage of non-local memory can increase GUPS results, because it will increase 2- or 4- fold the number of memory banks, available for random accesses. (GUPS typically limited by nonideal memory-subsystem and by slow memory bank opening/closing. With more banks it can do update to one bank, while the other banks are opening/closing.) Thanks. UPDATE: (you may nor reorder the memory accesses that the program makes). But can compiler reorder loops nesting? E.g. hpcc/RandomAccess.c /* Perform updates to main table. The scalar equivalent is: * * u64Int ran; * ran = 1; * for (i=0; i<NUPDATE; i++) { * ran = (ran << 1) ^ (((s64Int) ran < 0) ? POLY : 0); * table[ran & (TableSize-1)] ^= stable[ran >> (64-LSTSIZE)]; * } */ for (j=0; j<128; j++) ran[j] = starts ((NUPDATE/128) * j); for (i=0; i<NUPDATE/128; i++) { /* #pragma ivdep */ for (j=0; j<128; j++) { ran[j] = (ran[j] << 1) ^ ((s64Int) ran[j] < 0 ? POLY : 0); Table[ran[j] & (TableSize-1)] ^= stable[ran[j] >> (64-LSTSIZE)]; } } The main loop here is for (i=0; i<NUPDATE/128; i++) { and the nested loop is for (j=0; j<128; j++) {. Using 'loop interchange' optimization, compiler can convert this code to for (j=0; j<128; j++) { for (i=0; i<NUPDATE/128; i++) { ran[j] = (ran[j] << 1) ^ ((s64Int) ran[j] < 0 ? POLY : 0); Table[ran[j] & (TableSize-1)] ^= stable[ran[j] >> (64-LSTSIZE)]; } } It can be done because this loop nest is perfect loop nest. Is such optimization prohibited by rules of HPCC?
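
    Separately from the rules question, the two memory placements being compared can be pinned down explicitly at run time on Linux NUMA machines with numactl - a sketch, where the binary names stand in for however the HPCC runs are actually launched:

        numactl --hardware                                   # show nodes, per-node memory sizes and distances
        numactl --cpunodebind=0 --membind=0      ./stream    # single CPU node, strictly node-local memory
        numactl --cpunodebind=0 --interleave=all ./stream    # same CPU node, pages interleaved across all nodes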

    Read the article

  • manipulating matrix operations (transpose, negation, addition, and multiplication) using functions in C

    - by user292489
    i was trying to manuplate matrices in my input file using functions. my input file is, A 3 3 1 2 3 4 5 6 7 8 9 B 3 3 1 0 0 0 1 0 0 0 1 C 2 3 3 5 8 -1 -2 -3 D 3 5 0 0 0 1 0 1 0 1 0 1 0 1 0 0 1 E 1 1 10 F 3 10 1 0 2 0 3 0 4 0 5 0 0 2 3 -1 -3 -4 -3 8 3 7 0 0 0 4 6 5 8 2 -1 10 i am having trouble in impementing the funcitons that i declared. i assumed my program will perform those operations: transpose, negate, add, and mutiply matices according to the users choise: /* once this program is compliled and excuted, it will perform the basic matrix operations: negation, transpose,a\ ddition, and multiplication. */ #include <stdio.h> #include <stdlib.h> #define MAX 10 int readmatrix(FILE *input, char martixname[6],int , mat[10][10], int i, int j); void printmatrix(char matrixname[6], int mat[10][10], int i, int j); void Negate(char matrixname[6], int mat[10][10], int i, int j); void add(char matrixname[6], int mat[10][10],int i, int k); void multiply(char matrixname[], int mat[][10], char A[], int i, int k); void transpose (char matrixname[], int mat[][10], char A[], int); void printT(int mat[][10], int); int selctoption(); char selectmatrix(); int main(int argc, char *argv[]) { char matrixtype[6]; int mat[][10]; FILE *filein; int size; int optionop; int matrixop; int option; if (argc != 2) { printf("Usage: excutable input.\n"); exit (0); } filein = fopen(argv[1], "r"); if (!filein) { printf("ERROR: input file not found.\n"); exit (0); } size = readmatrix (filein, matrixtype); printmatrix(matrix[][10], size); option = selectoption(); matrixtype = selectmatrix(); //printf("You have: %5.2f ", deposit); optionop = readmatrix(option, matrix[][10], size); if (choiceop == 6) { printf("Thanks for using the matrix operation program.\n"); exit(0); } printf("Please select from the following matrix operations:\n") printf("\t1. Print matrix\n"); printf("\t2. Negate matrix\n"); printf("\t3. Transpose matrix\n"); printf("\t4. Add matrices\n"); printf("\t5. Multiply matrices\n"); printf("\t6. Quit\n"); fclose(filein); return 0; } do { printf("Please select option(1-%d):", optionop); scanf("%d", &matrixop); }while(matrixop <= 0 || matrixop > optionop); void readmatrix (FILE *in, int mat[][10], char A[], int i, int j) { int i=0,j = 0; while (fscanf(in, "%d", &mat[i][j]) != EOF) return 0; } // i would apprtaite anyones feedback. //thank you!
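
    As a starting point, here is a minimal sketch of one consistent shape for the reader, so that the prototype, the definition, and the call site all agree (the listing above mixes several incompatible signatures); the names follow the question, and the menu and operation functions would be filled in the same style:

        #include <stdio.h>

        #define MAX 10

        /* Reads one matrix block: a header like "A 3 3" followed by rows*cols values.
           Returns 1 on success, 0 on EOF or malformed input. */
        int readmatrix(FILE *in, char *name, int mat[MAX][MAX], int *rows, int *cols)
        {
            int i, j;
            if (fscanf(in, " %c %d %d", name, rows, cols) != 3)
                return 0;
            for (i = 0; i < *rows; i++)
                for (j = 0; j < *cols; j++)
                    if (fscanf(in, "%d", &mat[i][j]) != 1)
                        return 0;
            return 1;
        }

        /* usage:
               int m[MAX][MAX], r, c;
               char name;
               while (readmatrix(filein, &name, m, &r, &c)) {
                   // store or operate on matrix "name" (r x c) here
               }
        */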

    Read the article

  • Start a short video when an incoming call is detected, first case using the emulator.

    - by Emanuel
    I want to be able to start a short video on an incoming phone call. The video will loop until the call is answered. I've loaded the video onto the emulator sdcard then created the appropriate level avd with a path to the sdcard.iso file on disk. Since I'm running on a Mac OS x snow leopard I am able to confirm the contents of the sdcard. All testing has be done on the Android emulator. In a separate project TestVideo I created an activity that just launches the video from the sdcard. That works as expected. Then I created another project TestIncoming that creates an activity that creates a PhoneStateListener that overrides the onCallStateChanged(int state, String incomingNumber) method. In the onCallStateChanged() method I check if state == TelephonyManager.CALL_STATE_RINGING. If true I create an Intent that starts the video. I'm actually using the code from the TestVideo project above. Here is the code snippet. PhoneStateListener callStateListener = new PhoneStateListener() { @Override public void onCallStateChanged(int state, String incomingNumber) { if(state == TelelphonyManager.CALL_STATE_RINGING) { Intent launchVideo = new Intent(MyActivity.this, LaunchVideo.class); startActivity(launchVideo); } } }; The PhoneStateListener is added to the TelephonyManager.listen() method like so, telephonyManager.listen(callStateListener, PhoneStateListener.LISTEN_CALL_STATE); Here is the part I'm unclear on, the manifest. What I've tried is the following: <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.example.incomingdemo" android:versionCode="1" android:versionName="1.0"> <application android:icon="@drawable/icon" android:label="@string/app_name"> <activity android:name=".IncomingVideoDemo" android:label="@string/app_name"> <intent-filter> <action android:name="android.intent.action.ANSWER" /> <category android:name="android.intent.category.DEFAULT" /> </intent-filter> </activity> <activity android:name=".LaunchVideo" android:label="LaunchVideo"> </activity> </application> <uses-sdk android:minSdkVersion="2" /> <uses-permission android:name="android.permission.READ_PHONE_STATE"/> </manifest> I've run the debugger after setting breakpoints in the IncomingVideoDemo activity where the PhoneStateListener is created and none of the breakpoints are hit. Any insights into solving this problem is greatly appreciated. Thanks.
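
    A PhoneStateListener registered from an activity only works while that registration is alive, so the more usual wiring for "start something on ring" is a manifest-registered BroadcastReceiver for the PHONE_STATE broadcast - a sketch below, reusing the LaunchVideo activity from the question (READ_PHONE_STATE is still required, and FLAG_ACTIVITY_NEW_TASK is needed when starting an activity from outside an activity context):

        public class IncomingCallReceiver extends BroadcastReceiver {
            @Override
            public void onReceive(Context context, Intent intent) {
                String state = intent.getStringExtra(TelephonyManager.EXTRA_STATE);
                if (TelephonyManager.EXTRA_STATE_RINGING.equals(state)) {
                    Intent launchVideo = new Intent(context, LaunchVideo.class);
                    launchVideo.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
                    context.startActivity(launchVideo);
                }
            }
        }

    with the matching manifest entry inside <application>:

        <receiver android:name=".IncomingCallReceiver">
            <intent-filter>
                <action android:name="android.intent.action.PHONE_STATE" />
            </intent-filter>
        </receiver>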

    Read the article

  • Speed Problem with Wireless Connectivity on Cisco 877w

    - by Carl Crawley
    Having a bit of a weird one with my local LAN setup. I recently installed a Cisco 877W router on my ADSL2+ connection and all is working really well. Upgraded the IOS to 12.4, and my wired clients are streaming super-fast at 1.3mb/s. However, there seems to be an issue with my wireless clients - I can't seem to stream any data across the local wireless connection (LAN), and Internet use, whilst responsive enough, isn't really comparable with the wired connection speed. For example, all devices are connected to an 8 Port Gb switch on FE0 from the Router along with a NAS disk, and on my wired clients I can transfer/stream etc. absolutely fine - however, transferring a local 700Mb file over the wireless LAN gives an estimate of 7-8 hours :( The Wireless config is as follows : interface Dot11Radio0 description WIRELESS INTERFACE no ip address ! encryption mode ciphers tkip ! ssid [MySSID] ! speed basic-1.0 basic-2.0 basic-5.5 6.0 9.0 basic-11.0 channel 2462 station-role root rts threshold 2312 world-mode dot11d country GB indoor bridge-group 1 bridge-group 1 subscriber-loop-control bridge-group 1 spanning-disabled bridge-group 1 block-unknown-source no bridge-group 1 source-learning no bridge-group 1 unicast-flooding All devices are connected to the Gb switch, which is connected to FE0 with the following: Hardware is Fast Ethernet, address is 0021.a03e.6519 (bia 0021.a03e.6519) Description: Uplink to Switch MTU 1500 bytes, BW 100000 Kbit/sec, DLY 100 usec, reliability 255/255, txload 1/255, rxload 1/255 Encapsulation ARPA, loopback not set Keepalive set (10 sec) Full-duplex, 100Mb/s ARP type: ARPA, ARP Timeout 04:00:00 Last input never, output never, output hang never Last clearing of "show interface" counters never Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0 Queueing strategy: fifo Output queue: 0/40 (size/max) 5 minute input rate 14000 bits/sec, 19 packets/sec 5 minute output rate 167000 bits/sec, 23 packets/sec 177365 packets input, 52089562 bytes, 0 no buffer Received 919 broadcasts, 0 runts, 0 giants, 0 throttles 260 input errors, 260 CRC, 0 frame, 0 overrun, 0 ignored 0 input packets with dribble condition detected 156673 packets output, 106218222 bytes, 0 underruns 0 output errors, 0 collisions, 2 interface resets 0 babbles, 0 late collision, 0 deferred 0 lost carrier, 0 no carrier 0 output buffer failures, 0 output buffers swapped out Not sure why I'm having problems on the wireless and I've reached the end of my Cisco knowledge... Thanks for any pointers! Carl

    Read the article

  • mysql index optimization for a table with multiple indexes that index some of the same columns

    - by Sean
    I have a table that stores some basic data about visitor sessions on third party web sites. This is its structure: id, site_id, unixtime, unixtime_last, ip_address, uid There are four indexes: id, site_id/unixtime, site_id/ip_address, and site_id/uid There are many different ways that we query this table, and all of them are specific to the site_id. The index with unixtime is used to display the list of visitors for a given date or time range. The other two are used to find all visits from an IP address or a "uid" (a unique cookie value created for each visitor), as well as determining if this is a new visitor or a returning visitor. Obviously storing site_id inside 3 indexes is inefficient for both write speed and storage, but I see no way around it, since I need to be able to quickly query this data for a given specific site_id. Any ideas on making this more efficient? I don't really understand B-trees besides some very basic stuff, but it's more efficient to have the left-most column of an index be the one with the least variance - correct? I considered making site_id the second column of the index for both ip_address and uid, but I think that would make the index less efficient, since the IP and UID are going to vary more than the site ID will, because we only have about 8000 unique sites per database server, but millions of unique visitors across all ~8000 sites on a daily basis. I've also considered removing site_id from the IP and UID indexes completely, since the chances of the same visitor going to multiple sites that share the same database server are quite small, but in cases where this does happen, I fear it could be quite slow to determine if this is a new visitor to this site_id or not. The query would be something like: select id from sessions where uid = 'value' and site_id = 123 limit 1 ... so if this visitor had visited this site before, it would only need to find one row with this site_id before it stopped. This wouldn't necessarily be super fast, but acceptably fast. But say we have a site that gets 500,000 visitors a day, and a particular visitor loves this site and goes there 10 times a day. Now they happen to hit another site on the same database server for the first time. The above query could take quite a long time to search through all of the potentially thousands of rows for this UID, scattered all over the disk, since it wouldn't be finding one for this site ID. Any insight on making this as efficient as possible would be appreciated :) Update - this is a MyISAM table with MySQL 5.0. My concerns are both with performance as well as storage space. This table is both read and write heavy. If I had to choose between performance and storage, my biggest concern is performance - but both are important. We use memcached heavily in all areas of our service, but that's not an excuse to not care about the database design. I want the database to be as efficient as possible.

    Read the article

  • Overriding Object.Equals() instance method in C#; now Code Analysis / FxCop warning CA2218: "should

    - by Chris W. Rea
    I've got a complex class in my C# project on which I want to be able to do equality tests. It is not a trivial class; it contains a variety of scalar properties as well as references to other objects and collections (e.g. IDictionary). For what it's worth, my class is sealed. To enable a performance optimization elsewhere in my system (an optimization that avoids a costly network round-trip), I need to be able to compare instances of these objects to each other for equality – other than the built-in reference equality – and so I'm overriding the Object.Equals() instance method. However, now that I've done that, Visual Studio 2008's Code Analysis a.k.a. FxCop, which I keep enabled by default, is raising the following warning: warning : CA2218 : Microsoft.Usage : Since 'MySuperDuperClass' redefines Equals, it should also redefine GetHashCode. I think I understand the rationale for this warning: If I am going to be using such objects as the key in a collection, the hash code is important. i.e. see this question. However, I am not going to be using these objects as the key in a collection. Ever. Feeling justified to suppress the warning, I looked up code CA2218 in the MSDN documentation to get the full name of the warning so I could apply a SuppressMessage attribute to my class as follows: [SuppressMessage("Microsoft.Naming", "CA2218:OverrideGetHashCodeOnOverridingEquals", Justification="This class is not to be used as key in a hashtable.")] However, while reading further, I noticed the following: How to Fix Violations To fix a violation of this rule, provide an implementation of GetHashCode. For a pair of objects of the same type, you must ensure that the implementation returns the same value if your implementation of Equals returns true for the pair. When to Suppress Warnings ----- Do not suppress a warning from this rule. [arrow & emphasis mine] So, I'd like to know: Why shouldn't I suppress this warning as I was planning to? Doesn't my case warrant suppression? I don't want to code up an implementation of GetHashCode() for this object that will never get called, since my object will never be the key in a collection. If I wanted to be pedantic, instead of suppressing, would it be more reasonable for me to override GetHashCode() with an implementation that throws a NotImplementedException? Update: I just looked this subject up again in Bill Wagner's good book Effective C#, and he states in "Item 10: Understand the Pitfalls of GetHashCode()": If you're defining a type that won't ever be used as the key in a container, this won't matter. Types that represent window controls, web page controls, or database connections are unlikely to be used as keys in a collection. In those cases, do nothing. All reference types will have a hash code that is correct, even if it is very inefficient. [...] In most types that you create, the best approach is to avoid the existence of GetHashCode() entirely. ... that's where I originally got this idea that I need not be concerned about GetHashCode() always.
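    For what it's worth, the field-based route CA2218 asks for is usually only a few lines when Equals already compares a fixed set of members. A minimal sketch, using a hypothetical sealed Widget class rather than the 'MySuperDuperClass' from the question:

        // Hypothetical example: GetHashCode combines exactly the members that
        // Equals compares, so two objects that are Equals always hash alike.
        public sealed class Widget
        {
            public int Id { get; set; }
            public string Name { get; set; }

            public override bool Equals(object obj)
            {
                var other = obj as Widget;
                if (other == null) return false;
                return Id == other.Id && string.Equals(Name, other.Name);
            }

            public override int GetHashCode()
            {
                unchecked
                {
                    int hash = 17;
                    hash = hash * 31 + Id;
                    hash = hash * 31 + (Name == null ? 0 : Name.GetHashCode());
                    return hash;
                }
            }
        }

    Whether that small implementation, a suppression, or a throwing override is the right call for a type that will never be used as a key is exactly what the question is weighing; the sketch only shows what the rule's suggested "fix" amounts to.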

    Read the article

  • bind() fails with windows socket error 10038

    - by herrturtur
    I'm trying to write a simple program that will receive a string of max 20 characters and print that string to the screen. The code compiles, but I get a bind() failed: 10038. After looking up the error number on msdn (socket operation on nonsocket), I changed some code from int sock; to SOCKET sock which shouldn't make a difference, but one never knows. Here's the code: #include <iostream> #include <winsock2.h> #include <cstdlib> using namespace std; const int MAXPENDING = 5; const int MAX_LENGTH = 20; void DieWithError(char *errorMessage); int main(int argc, char **argv) { if(argc!=2){ cerr << "Usage: " << argv[0] << " <Port>" << endl; exit(1); } // start winsock2 library WSAData wsaData; if(WSAStartup(MAKEWORD(2,0), &wsaData)!=0){ cerr << "WSAStartup() failed" << endl; exit(1); } // create socket for incoming connections SOCKET servSock; if(servSock=socket(AF_INET, SOCK_STREAM, IPPROTO_TCP)==INVALID_SOCKET) DieWithError("socket() failed"); // construct local address structure struct sockaddr_in servAddr; memset(&servAddr, 0, sizeof(servAddr)); servAddr.sin_family = AF_INET; servAddr.sin_addr.s_addr = INADDR_ANY; servAddr.sin_port = htons(atoi(argv[1])); // bind to the local address int servAddrLen = sizeof(servAddr); if(bind(servSock, (SOCKADDR*)&servAddr, servAddrLen)==SOCKET_ERROR) DieWithError("bind() failed"); // mark the socket to listen for incoming connections if(listen(servSock, MAXPENDING)<0) DieWithError("listen() failed"); // accept incoming connections int clientSock; struct sockaddr_in clientAddr; char buffer[MAX_LENGTH]; int recvMsgSize; int clientAddrLen = sizeof(clientAddr); for(;;){ // wait for a client to connect if((clientSock=accept(servSock, (sockaddr*)&clientAddr, &clientAddrLen))<0) DieWithError("accept() failed"); // clientSock is connected to a client // BEGIN Handle client cout << "Handling client " << inet_ntoa(clientAddr.sin_addr) << endl; if((recvMsgSize = recv(clientSock, buffer, MAX_LENGTH, 0)) <0) DieWithError("recv() failed"); cout << "Word in the tubes: " << buffer << endl; closesocket(clientSock); // END Handle client } } void DieWithError(char *errorMessage) { fprintf(stderr, "%s: %d\n", errorMessage, WSAGetLastError()); exit(1); }

    Read the article

  • Using PHP's IMAP library triggers Kaspersky's Antivirus

    - by TMG
    Hello, I just started working with PHP's IMAP library today, and whenever imap_fetchbody or imap_body is called, it triggers my Kaspersky antivirus. The viruses are Trojan.Win32.Agent.dmyq and Trojan.Win32.FraudPack.aoda. I am running this off a local development machine with XAMPP and Kaspersky AV. Now, I am sure there are viruses there since there is spam in the box (who doesn't need some viagra or vicodin these days?). And I know that since the raw body includes attachments and different mime-types, bad stuff can be in the body. So my question is: are there any risks in using these libraries? I am assuming that the IMAP functions are retrieving the body, caching it to disk/memory, and the AV scanning it sees the data. Is that correct? Are there any known security concerns using this library (I couldn't find any)? Does it clean up cached message parts perfectly or might viral files be sitting somewhere? Is there a better way to get plain text out of the body than this? Right now I am using the following code (credit to Kevin Steffer): function get_mime_type(&$structure) { $primary_mime_type = array("TEXT", "MULTIPART","MESSAGE", "APPLICATION", "AUDIO","IMAGE", "VIDEO", "OTHER"); if($structure->subtype) { return $primary_mime_type[(int) $structure->type] . '/' .$structure->subtype; } return "TEXT/PLAIN"; } function get_part($stream, $msg_number, $mime_type, $structure = false, $part_number = false) { if(!$structure) { $structure = imap_fetchstructure($stream, $msg_number); } if($structure) { if($mime_type == get_mime_type($structure)) { if(!$part_number) { $part_number = "1"; } $text = imap_fetchbody($stream, $msg_number, $part_number); if($structure->encoding == 3) { return imap_base64($text); } else if($structure->encoding == 4) { return imap_qprint($text); } else { return $text; } } if($structure->type == 1) /* multipart */ { while(list($index, $sub_structure) = each($structure->parts)) { if($part_number) { $prefix = $part_number . '.'; } $data = get_part($stream, $msg_number, $mime_type, $sub_structure,$prefix . ($index + 1)); if($data) { return $data; } } // END OF WHILE } // END OF MULTIPART } // END OF STRUCTURE return false; } // END OF FUNCTION $connection = imap_open($server, $login, $password); $count = imap_num_msg($connection); for($i = 1; $i <= $count; $i++) { $header = imap_headerinfo($connection, $i); $from = $header->fromaddress; $to = $header->toaddress; $subject = $header->subject; $date = $header->date; $body = get_part($connection, $i, "TEXT/PLAIN"); }

    Read the article

  • Logging raw HTTP request/response in ASP.NET MVC & IIS7

    - by Greg Beech
    I'm writing a web service (using ASP.NET MVC) and for support purposes we'd like to be able to log the requests and response in as close as possible to the raw, on-the-wire format (i.e including HTTP method, path, all headers, and the body) into a database. What I'm not sure of is how to get hold of this data in the least 'mangled' way. I can re-constitute what I believe the request looks like by inspecting all the properties of the HttpRequest object and building a string from them (and similarly for the response) but I'd really like to get hold of the actual request/response data that's sent on the wire. I'm happy to use any interception mechanism such as filters, modules, etc. and the solution can be specific to IIS7. However, I'd prefer to keep it in managed code only. Any recommendations? Edit: I note that HttpRequest has a SaveAs method which can save the request to disk but this reconstructs the request from the internal state using a load of internal helper methods that cannot be accessed publicly (quite why this doesn't allow saving to a user-provided stream I don't know). So it's starting to look like I'll have to do my best to reconstruct the request/response text from the objects... groan. Edit 2: Please note that I said the whole request including method, path, headers etc. The current responses only look at the body streams which does not include this information. Edit 3: Does nobody read questions around here? Five answers so far and yet not one even hints at a way to get the whole raw on-the-wire request. Yes, I know I can capture the output streams and the headers and the URL and all that stuff from the request object. I already said that in the question, see: I can re-constitute what I believe the request looks like by inspecting all the properties of the HttpRequest object and building a string from them (and similarly for the response) but I'd really like to get hold of the actual request/response data that's sent on the wire. If you know the complete raw data (including headers, url, http method, etc.) simply cannot be retrieved then that would be useful to know. Similarly if you know how to get it all in the raw format (yes, I still mean including headers, url, http method, etc.) without having to reconstruct it, which is what I asked, then that would be very useful. But telling me that I can reconstruct it from the HttpRequest/HttpResponse objects is not useful. I know that. I already said it. Please note: Before anybody starts saying this is a bad idea, or will limit scalability, etc., we'll also be implementing throttling, sequential delivery, and anti-replay mechanisms in a distributed environment, so database logging is required anyway. I'm not looking for a discussion of whether this is a good idea, I'm looking for how it can be done.
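    Since the question already rules out reconstruction from HttpRequest as the ideal, it is worth being explicit that managed code in the integrated pipeline never sees the literal wire bytes; the closest managed-only approach is a module that rebuilds the request line, headers, and body from the request object and tees the response body through Response.Filter. A rough sketch under that assumption (RawTrafficModule, TeeStream, and LogToDatabase are placeholder names, and the persistence call is left empty):

        // Sketch only: reassembles the request from HttpRequest and captures
        // the response body via Response.Filter; it does not capture raw bytes.
        using System;
        using System.IO;
        using System.Text;
        using System.Web;

        public class RawTrafficModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.BeginRequest += delegate
                {
                    HttpRequest request = app.Context.Request;
                    var sb = new StringBuilder();
                    sb.AppendLine(request.HttpMethod + " " + request.RawUrl + " " +
                                  request.ServerVariables["SERVER_PROTOCOL"]);
                    foreach (string name in request.Headers)
                        sb.AppendLine(name + ": " + request.Headers[name]);
                    sb.AppendLine();

                    var body = new byte[request.InputStream.Length];  // InputStream is buffered and seekable
                    request.InputStream.Read(body, 0, body.Length);
                    request.InputStream.Position = 0;                 // rewind so MVC model binding still works
                    sb.Append(request.ContentEncoding.GetString(body));
                    app.Context.Items["rawRequest"] = sb.ToString();

                    var tee = new TeeStream(app.Context.Response.Filter);
                    app.Context.Items["responseTee"] = tee;
                    app.Context.Response.Filter = tee;                // copies the outgoing body as it is written
                };

                app.EndRequest += delegate
                {
                    var tee = (TeeStream)app.Context.Items["responseTee"];
                    LogToDatabase((string)app.Context.Items["rawRequest"],
                                  app.Context.Response.Status,
                                  tee == null ? "" : tee.CapturedText());
                };
            }

            public void Dispose() { }

            private static void LogToDatabase(string rawRequest, string status, string responseBody)
            {
                // Placeholder: write to whatever store the service already uses.
            }
        }

        // Pass-through stream that keeps a copy of everything written through it.
        public class TeeStream : Stream
        {
            private readonly Stream _inner;
            private readonly MemoryStream _copy = new MemoryStream();
            public TeeStream(Stream inner) { _inner = inner; }
            public string CapturedText() { return Encoding.UTF8.GetString(_copy.ToArray()); }

            public override void Write(byte[] buffer, int offset, int count)
            {
                _copy.Write(buffer, offset, count);
                _inner.Write(buffer, offset, count);
            }
            public override void Flush() { _inner.Flush(); }
            public override bool CanRead { get { return false; } }
            public override bool CanSeek { get { return false; } }
            public override bool CanWrite { get { return true; } }
            public override long Length { get { return _copy.Length; } }
            public override long Position { get { return _copy.Position; } set { } }
            public override int Read(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
            public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
            public override void SetLength(long value) { throw new NotSupportedException(); }
        }

    The module would be registered under <system.webServer><modules> in web.config for IIS7 integrated mode; response headers can additionally be read from Response.Headers at EndRequest in that mode, but headers IIS itself adds or rewrites afterwards still will not appear, which is the gap the question is pointing at.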

    Read the article

  • Why does using ASP.NET OutputCache keep returning a 200 OK, not a 304 Not Modified?

    - by Pure.Krome
    Hi folks, I have a simple aspx page. Here's the top of it:- <%@ Page Language="C#" AutoEventWireup="true" CodeFile="Foo.aspx.cs" Inherits="Foo" %> <%@ OutputCache Duration="3600" VaryByParam="none" Location="Any" %> Now, every time I hit the page in FireFox (either hit F5 or hit enter in the url bar) I keep getting a 200 OK response. Here's a sample reply from FireBug :- Request Headers:- GET /sitemap.xml HTTP/1.1 Host: localhost.foo.com.au User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-GB; rv:1.9.2) Gecko/20100115 Firefox/3.6 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-au,en-gb;q=0.7,en;q=0.3 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 115 Connection: keep-alive Cookie: <snipped> If-Modified-Since: Tue, 01 Jun 2010 07:35:17 GMT If-None-Match: "" Cache-Control: max-age=0 Response Headers:- HTTP/1.1 200 OK Cache-Control: public Content-Type: text/xml; charset=utf-8 Expires: Tue, 01 Jun 2010 08:35:17 GMT Last-Modified: Tue, 01 Jun 2010 07:35:17 GMT Etag: "" Server: Microsoft-IIS/7.5 X-Powered-By: UrlRewriter.NET 2.0.0 X-AspNet-Version: 4.0.30319 Date: Tue, 01 Jun 2010 07:35:20 GMT Content-Length: 775 Firebug Cache tab:- Last Modified Tue Jun 01 2010 17:35:20 GMT+1000 (AUS Eastern Standard Time) Last Fetched Tue Jun 01 2010 17:35:20 GMT+1000 (AUS Eastern Standard Time) Expires Tue Jun 01 2010 18:35:17 GMT+1000 (AUS Eastern Standard Time) Data Size 775 Fetch Count 105 Device disk Now, if I try it in Fiddler using the Request Builder (and no extra data) I also keep getting the same 200 OK reply. Request Headers:- GET http://localhost.foo.com.au/sitemap.xml HTTP/1.1 User-Agent: Fiddler Host: foo.com.au Response Headers:- HTTP/1.1 200 OK Cache-Control: public Content-Type: text/xml; charset=utf-8 Expires: Tue, 01 Jun 2010 07:58:00 GMT Last-Modified: Tue, 01 Jun 2010 06:58:00 GMT ETag: "" Server: Microsoft-IIS/7.5 X-Powered-By: UrlRewriter.NET 2.0.0 X-AspNet-Version: 4.0.30319 Date: Tue, 01 Jun 2010 06:59:16 GMT Content-Length: 775 It looks like it's asking the browser to cache it, but it's not :( Server is a localhost IIS 7.5 on Win7 (as listed in the response data). Can anyone see what I'm doing wrong?
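    One way to check the conditional-GET half in isolation — without claiming it explains what the OutputCache directive above is doing — is to remove the directive temporarily and answer If-Modified-Since by hand, returning the 304 explicitly. A sketch, assuming the page can compute its own last-modified time (the fixed date below is purely illustrative):

        // Diagnostic sketch for Foo.aspx.cs: answer conditional GETs manually.
        // The hard-coded lastWrite value is an assumption; a real sitemap page
        // would derive it from its data.
        using System;
        using System.Web;
        using System.Web.UI;

        public partial class Foo : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                DateTime lastWrite = new DateTime(2010, 6, 1, 7, 0, 0, DateTimeKind.Utc);

                string header = Request.Headers["If-Modified-Since"];
                DateTime since;
                if (header != null && DateTime.TryParse(header, out since) &&
                    since.ToUniversalTime() >= lastWrite)
                {
                    Response.StatusCode = 304;        // Not Modified
                    Response.SuppressContent = true;  // a 304 carries no body
                    Response.End();
                    return;
                }

                Response.Cache.SetCacheability(HttpCacheability.Public);
                Response.Cache.SetLastModified(lastWrite);
                Response.Cache.SetMaxAge(TimeSpan.FromHours(1));
                // ...render the sitemap XML as usual...
            }
        }

    If the hand-rolled version produces 304s while the OutputCache version does not, that at least narrows the problem to how the directive (or the empty ETag visible in the capture above) is being handled, rather than to the browser or Fiddler.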

    Read the article

< Previous Page | 545 546 547 548 549 550 551 552 553 554 555 556  | Next Page >