Search Results

Search found 2190 results on 88 pages for 'pg dump'.


  • How I do VCS

    - by Wes McClure
    After years of dabbling with different version control systems and techniques, I wanted to share some of what I like and dislike in a few blog posts. To start this out, I want to talk about how I use VCS in a team environment. These come in a series of tips or best practices that I try to follow. Note: this list is subject to change in the future.

    Always use some form of version control for all aspects of software development.
    - Development is an evolution. Looking back at where we were is an invaluable asset in that process. This includes data schemas and documentation.
    - Reverting / reapplying changes is absolutely critical for efficient development.
    - The tools I use: Code: Hg (preferred), SVN. Database: TSqlMigrations. Documents: sometimes in the code repository, also SharePoint with versioning.

    Always tag a commit (changeset) with comments.
    - This is a quick way to describe to someone else (or your future self) what the changeset entails.
    - Be brief but courteous. One or two sentences about the task, not the actual changes.
    - Use precommit hooks or set up the central repository to reject changes without comments (a minimal hook sketch appears after this post).

    Link changesets to documentation.
    - If your project management system integrates with version control, or has a way to externally reference stories, tasks etc., then leave a reference in the commit. This helps locate more information about the commit and/or related changesets.
    - It's best to have a precommit hook or system that requires this information, otherwise it's easy to forget.

    The ability to work offline is required, including commits and history.
    - Yes, this requires a DVCS locally, but it doesn't require the central repository to be a DVCS. I prefer to use either Git or Hg, but if it isn't possible to migrate the central repository, it's still possible for a developer to push / pull changes to that repository from a local Hg or Git repository.

    Never lock resources (files) in a central repository… rude!
    - We have merge tools for a reason. Merging sucked a long time ago; it doesn't anymore… stop locking files! This is unproductive, rude and annoying to other team members.

    Always review everything in your commit.
    - Never ever commit a set of files without reviewing the changes in each.
    - Never add a file without asking yourself, deep down inside, does this belong?
    - If you leave to make changes during a review, start the review over when you come back. Never assume you didn't touch a file; double check.
    - This is another reason why you want to avoid large, infrequent commits.

    Requirements for tools:
    - Quickly show pending changes for the entire repository.
    - Default action for a resource with pending changes is a diff.
    - Pluggable diff & merge tool.
    - Produce a unified diff or a diff of all changes. This is helpful to bulk review changes instead of opening each file.

    The central repository is not your own personal dump yard.
    - Breaking this rule is a sure-fire way to get the F bomb dropped in front of your name, multiple times.
    - If you turn on Visual Studio's "commit on closing studio" option, I will personally break your fingers. By the way, the person(s) in charge of this feature should be fired and never be allowed near programming, ever again.

    Commit (integrate) to the central repository / branch frequently.
    - I try to do this before leaving each day, especially without a DVCS. One never knows when they might need to work remotely the following day.

    Never commit commented-out code.
    - If it isn't needed anymore, delete it!
    - If you aren't sure whether it might be useful in the future, delete it! This is why we have history.
    - If you don't know why it's commented out, figure it out and then either uncomment it or delete it.

    Don't commit build artifacts, user preferences and temporary files.
    - Build artifacts do not belong in VCS; everything in them is present in the code (i.e. bin\*, obj\*, *.dll, *.exe).
    - User preferences are your settings; stop overriding my preferences files! (i.e. *.suo and *.user files)
    - Most tools allow you to ignore certain files, and Hg/Git allow you to version this as an ignore file. Set this up as a first step when creating a new repository! (A sample ignore file follows this post.)

    Be polite when merging unresolved conflicts.
    - Count to 10, cuss, grab a stress ball and realize it's not a big deal. Actually, it's an opportunity to learn that someone else is working in the same area and you might want to communicate with them.
    - Following the other rules, especially committing frequently, will reduce the likelihood of this.
    - Suck it up; we all have to deal with this unintended consequence at times. Just be careful and GET FAMILIAR with your merge tool. It's really not as scary as you think. I personally prefer KDiff3, as its merging capabilities rock.
    - Don't blindly merge and then blindly commit your changes; this is rude and unprofessional. Make sure you understand why the conflict occurred and which parts of the code you want to keep.
    - Apply scrutiny when you commit a manual merge: review the diff! Make sure you test the changes (build and run automated tests).

    Become intimate with your version control system and the tools you use with it.
    - Avoid trial and error as much as possible; sit down and test the tool out, read some tutorials etc. Create test repositories and walk through common scenarios.
    - Find the most efficient way to do your work. These tools will be used repetitively, so inefficiencies will add up. Sometimes this involves a mix of tools, both GUI and CLI. I like a combination of both TortoiseHg and the hg CLI to get the job done efficiently.

    Always tag releases.
    - Create a way to find a given release, whether this be in comments or an explicit tag / branch. This should be readily discoverable.
    - Create release branches to patch bugs and then merge the changes back to other development branch(es).

    If using feature branches, strive for periodic integrations.
    - Feature branches often cause forked code that becomes irreconcilable. Strive to re-integrate somewhat frequently with the branch this code will ultimately be merged into. This will avoid merge conflicts in the future.
    - Feature branches are best when they are mutually exclusive of active development in other branches.

    Use and abuse local commits, at least one per task in a story.
    - This builds a trail of changes in your local repository that can be pushed to a central repository when the story is complete.

    Never commit a broken build or failing tests to the central repository.
    - It's OK for a local commit to break the build and/or tests. In fact, I encourage this if it helps group the changes more logically. This is one of the main reasons I got excited about DVCS: when I wanted more than one changeset for a set of pending changes but some files could be grouped into both changesets (like solution file / project file changes).
    - If you have more than a dozen outstanding changed resources, there should probably be more than one commit involved. The exception is maintaining code bases that require shotgun surgery; in that case, it's a design smell :)

    Don't version sensitive information.
    - Especially usernames / passwords.

    There is one area I haven't found a solution I like yet: versioning 3rd-party libraries and/or code. I really dislike keeping any assemblies in the repository, but it seems to be a common practice for external libraries. Please feel free to share your ideas about this below.

    -Wes
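    A minimal sketch of the "reject changes without comments" idea, written as a Git commit-msg hook in Python; the issue-reference pattern "PROJ-123" is an assumed convention for illustration, not something prescribed above:

        #!/usr/bin/env python
        # Hypothetical commit-msg hook: reject commits whose message is empty
        # or lacks an issue reference such as "PROJ-123".
        # Install as .git/hooks/commit-msg and make it executable.
        import re
        import sys

        def main():
            msg_file = sys.argv[1]  # Git passes the path of the commit message file
            with open(msg_file) as f:
                # skip the comment lines Git adds to the message template
                lines = [line for line in f if not line.startswith('#')]
            message = ''.join(lines).strip()
            if not message:
                sys.stderr.write('Rejected: commit message is empty.\n')
                return 1
            if not re.search(r'PROJ-\d+', message):
                sys.stderr.write('Rejected: reference a story/task, e.g. PROJ-123.\n')
                return 1
            return 0

        if __name__ == '__main__':
            sys.exit(main())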
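    And a sample ignore file matching the build-artifact and user-preference patterns named above, shown in .gitignore syntax (an .hgignore works the same with a "syntax: glob" first line):

        # Build artifacts -- everything in them is present in the code
        bin/
        obj/
        *.dll
        *.exe
        # User preferences -- personal settings stay personal
        *.suo
        *.user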

    Read the article

  • How to solve an EXCEPTION_PRIV_INSTRUCTION exception while running a desktop project? [on hold]

    - by Haritha
    While running my desktop project I am getting EXCEPTION_PRIV_INSTRUCTION. How do I solve this? While running, the following report is produced:

    # A fatal error has been detected by the Java Runtime Environment:
    #
    # EXCEPTION_PRIV_INSTRUCTION (0xc0000096) at pc=0x02f5a92b, pid=3012, tid=3104
    #
    # JRE version: 7.0-b147
    # Java VM: Java HotSpot(TM) Client VM (21.0-b17 mixed mode, sharing windows-x86 )
    # Problematic frame:
    # C 0x02f5a92b
    #
    # Failed to write core dump. Minidumps are not enabled by default on client versions of Windows
    #
    # If you would like to submit a bug report, please visit:
    # http://bugreport.sun.com/bugreport/crash.jsp
    # The crash happened outside the Java Virtual Machine in native code.
    # See problematic frame for where to report the bug.
    #

    --------------- T H R E A D ---------------

    Current thread (0x02f5a800): JavaThread "LWJGL Application" [_thread_in_native, id=3104, stack(0x076f0000,0x07740000)]

    siginfo: ExceptionCode=0xc0000096

    Registers:
    EAX=0x000df4f0, EBX=0x32afc180, ECX=0x000df4f0, EDX=0x00000020
    ESP=0x0773f768, EBP=0x0773f790, ESI=0x32afc180, EDI=0x02f5a800
    EIP=0x02f5a92b, EFLAGS=0x00010206

    Top of Stack: (sp=0x0773f768)
    0x0773f768: 02bd429c 02bd429c 0773f770 32afc180
    0x0773f778: 0773f7b8 32b022c8 00000000 32afc180
    0x0773f788: 00000000 0773f7a0 0773f7dc 00943187
    0x0773f798: 229ec1c0 00948839 69081736 00000000
    0x0773f7a8: 089b0048 00000000 00000014 00001406
    0x0773f7b8: 00000002 0773f7bc 32afbeb0 0773f7f8
    0x0773f7c8: 32b022c8 00000000 32afbf00 0773f7a0
    0x0773f7d8: 0773f7f0 0773f81c 00943187 69081736

    Instructions: (pc=0x02f5a92b)
    0x02f5a90b: 00 43 00 00 00 00 f0 bc 02 e8 00 e9 22 40 f7 73
    0x02f5a91b: 07 85 a5 94 00 90 f7 73 07 50 cc a0 6d d8 49 c0
    0x02f5a92b: 6d 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    0x02f5a93b: 00 00 00 00 00 00 00 00 00 08 80 3d 37 00 00 00

    Register to memory mapping:
    EAX=0x000df4f0 is an unknown value
    EBX=0x32afc180 is an oop {method} - klass: {other class}
    ECX=0x000df4f0 is an unknown value
    EDX=0x00000020 is an unknown value
    ESP=0x0773f768 is pointing into the stack for thread: 0x02f5a800
    EBP=0x0773f790 is pointing into the stack for thread: 0x02f5a800
    ESI=0x32afc180 is an oop {method} - klass: {other class}
    EDI=0x02f5a800 is a thread

    Stack: [0x076f0000,0x07740000], sp=0x0773f768, free space=317k
    Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
    C 0x02f5a92b
    j org.lwjgl.opengl.GL11.glVertexPointer(IILjava/nio/FloatBuffer;)V+48
    j com.badlogic.gdx.backends.lwjgl.LwjglGL10.glVertexPointer(IIILjava/nio/Buffer;)V+53
    j com.badlogic.gdx.graphics.glutils.VertexArray.bind()V+149
    j com.badlogic.gdx.graphics.Mesh.bind()V+25
    j com.badlogic.gdx.graphics.Mesh.render(IIIZ)V+32
    j com.badlogic.gdx.graphics.Mesh.render(III)V+8
    j com.badlogic.gdx.graphics.g2d.SpriteBatch.flush()V+197
    j com.badlogic.gdx.graphics.g2d.SpriteBatch.switchTexture(Lcom/badlogic/gdx/graphics/Texture;)V+1
    j com.badlogic.gdx.graphics.g2d.SpriteBatch.draw(Lcom/badlogic/gdx/graphics/Texture;FFFF)V+33
    j sevenseas.game.WorldRenderer.drawBob()V+54
    j sevenseas.game.WorldRenderer.render()V+12
    j sevenseas.game.GameClass.render(F)V+38
    j com.badlogic.gdx.Game.render()V+19
    j com.badlogic.gdx.backends.lwjgl.LwjglApplication.mainLoop()V+642
    j com.badlogic.gdx.backends.lwjgl.LwjglApplication$1.run()V+27
    v ~StubRoutines::call_stub
    V [jvm.dll+0x122c7e]
    V [jvm.dll+0x1c9c0e]
    V [jvm.dll+0x122e73]
    V [jvm.dll+0x122ed7]
    V [jvm.dll+0xccd1f]
    V [jvm.dll+0x14433f]
    V [jvm.dll+0x171549]
    C [msvcr100.dll+0x5c6de] endthreadex+0x3a
    C [msvcr100.dll+0x5c788] endthreadex+0xe4
    C [kernel32.dll+0xb713] GetModuleFileNameA+0x1b4

    Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
    j org.lwjgl.opengl.GL11.nglVertexPointer(IIIJJ)V+0
    j org.lwjgl.opengl.GL11.glVertexPointer(IILjava/nio/FloatBuffer;)V+48
    j com.badlogic.gdx.backends.lwjgl.LwjglGL10.glVertexPointer(IIILjava/nio/Buffer;)V+53
    j com.badlogic.gdx.graphics.glutils.VertexArray.bind()V+149
    j com.badlogic.gdx.graphics.Mesh.bind()V+25
    j com.badlogic.gdx.graphics.Mesh.render(IIIZ)V+32
    j com.badlogic.gdx.graphics.Mesh.render(III)V+8
    j com.badlogic.gdx.graphics.g2d.SpriteBatch.flush()V+197
    j com.badlogic.gdx.graphics.g2d.SpriteBatch.switchTexture(Lcom/badlogic/gdx/graphics/Texture;)V+1
    j com.badlogic.gdx.graphics.g2d.SpriteBatch.draw(Lcom/badlogic/gdx/graphics/Texture;FFFF)V+33
    j sevenseas.game.WorldRenderer.drawBob()V+54
    j sevenseas.game.WorldRenderer.render()V+12
    j sevenseas.game.GameClass.render(F)V+38
    j com.badlogic.gdx.Game.render()V+19
    j com.badlogic.gdx.backends.lwjgl.LwjglApplication.mainLoop()V+642
    j com.badlogic.gdx.backends.lwjgl.LwjglApplication$1.run()V+27
    v ~StubRoutines::call_stub

    --------------- P R O C E S S ---------------

    Java Threads: ( => current thread )
      0x003d6c00 JavaThread "DestroyJavaVM" [_thread_blocked, id=3240, stack(0x008c0000,0x00910000)]
    =>0x02f5a800 JavaThread "LWJGL Application" [_thread_in_native, id=3104, stack(0x076f0000,0x07740000)]
      0x02bcf000 JavaThread "Service Thread" daemon [_thread_blocked, id=2612, stack(0x02e00000,0x02e50000)]
      0x02bc1000 JavaThread "C1 CompilerThread0" daemon [_thread_blocked, id=2776, stack(0x02db0000,0x02e00000)]
      0x02bbf400 JavaThread "Attach Listener" daemon [_thread_blocked, id=2448, stack(0x02d60000,0x02db0000)]
      0x02bbe000 JavaThread "Signal Dispatcher" daemon [_thread_blocked, id=1764, stack(0x02d10000,0x02d60000)]
      0x02bb8000 JavaThread "Finalizer" daemon [_thread_blocked, id=3864, stack(0x02cc0000,0x02d10000)]
      0x02bb3400 JavaThread "Reference Handler" daemon [_thread_blocked, id=2424, stack(0x02c70000,0x02cc0000)]

    Other Threads:
      0x02bb1800 VMThread [stack: 0x02c20000,0x02c70000] [id=3076]
      0x02bd1000 WatcherThread [stack: 0x02e50000,0x02ea0000] [id=3276]

    VM state: not at safepoint (normal execution)

    VM Mutex/Monitor currently owned by a thread: None

    Heap
     def new generation   total 4928K, used 2571K [0x229c0000, 0x22f10000, 0x27f10000)
      eden space 4416K,  46% used [0x229c0000, 0x22bc2e38, 0x22e10000)
      from space 512K,  100% used [0x22e90000, 0x22f10000, 0x22f10000)
      to   space 512K,    0% used [0x22e10000, 0x22e10000, 0x22e90000)
     tenured generation   total 10944K, used 634K [0x27f10000, 0x289c0000, 0x329c0000)
       the space 10944K,   5% used [0x27f10000, 0x27faea60, 0x27faec00, 0x289c0000)
     compacting perm gen  total 12288K, used 1655K [0x329c0000, 0x335c0000, 0x369c0000)
       the space 12288K,  13% used [0x329c0000, 0x32b5dc58, 0x32b5de00, 0x335c0000)
        ro space 10240K,  42% used [0x369c0000, 0x36dfc660, 0x36dfc800, 0x373c0000)
        rw space 12288K,  53% used [0x373c0000, 0x37a38180, 0x37a38200, 0x37fc0000)

    Code Cache [0x00940000, 0x009d8000, 0x02940000)
     total_blobs=305 nmethods=80 adapters=158 free_code_cache=32183Kb largest_free_block=32955904

    Dynamic libraries:
    0x00400000 - 0x0042f000  C:\Program Files\Java\jre7\bin\javaw.exe
    0x7c900000 - 0x7c9af000  C:\WINDOWS\system32\ntdll.dll
    0x7c800000 - 0x7c8f6000  C:\WINDOWS\system32\kernel32.dll
    0x77dd0000 - 0x77e6b000  C:\WINDOWS\system32\ADVAPI32.dll
    0x77e70000 - 0x77f02000  C:\WINDOWS\system32\RPCRT4.dll
    0x77fe0000 - 0x77ff1000  C:\WINDOWS\system32\Secur32.dll
    0x7e410000 - 0x7e4a1000  C:\WINDOWS\system32\USER32.dll
    0x77f10000 - 0x77f59000  C:\WINDOWS\system32\GDI32.dll
    0x773d0000 - 0x774d3000  C:\WINDOWS\WinSxS\x86_Microsoft.Windows.Common-Controls_6595b64144ccf1df_6.0.2600.5512_x-ww_35d4ce83\COMCTL32.dll
    0x77c10000 - 0x77c68000  C:\WINDOWS\system32\msvcrt.dll
    0x77f60000 - 0x77fd6000  C:\WINDOWS\system32\SHLWAPI.dll
    0x76390000 - 0x763ad000  C:\WINDOWS\system32\IMM32.DLL
    0x629c0000 - 0x629c9000  C:\WINDOWS\system32\LPK.DLL
    0x74d90000 - 0x74dfb000  C:\WINDOWS\system32\USP10.dll
    0x78aa0000 - 0x78b5e000  C:\Program Files\Java\jre7\bin\msvcr100.dll
    0x6d940000 - 0x6dc61000  C:\Program Files\Java\jre7\bin\client\jvm.dll
    0x71ad0000 - 0x71ad9000  C:\WINDOWS\system32\WSOCK32.dll
    0x71ab0000 - 0x71ac7000  C:\WINDOWS\system32\WS2_32.dll
    0x71aa0000 - 0x71aa8000  C:\WINDOWS\system32\WS2HELP.dll
    0x76b40000 - 0x76b6d000  C:\WINDOWS\system32\WINMM.dll
    0x76bf0000 - 0x76bfb000  C:\WINDOWS\system32\PSAPI.DLL
    0x6d8d0000 - 0x6d8dc000  C:\Program Files\Java\jre7\bin\verify.dll
    0x6d370000 - 0x6d390000  C:\Program Files\Java\jre7\bin\java.dll
    0x6d920000 - 0x6d933000  C:\Program Files\Java\jre7\bin\zip.dll
    0x6cec0000 - 0x6cf42000  C:\Documents and Settings\7stl0225\Local Settings\Temp\libgdx7stl0225\37fe1abc\gdx.dll
    0x10000000 - 0x1004c000  C:\Documents and Settings\7stl0225\Local Settings\Temp\libgdx7stl0225\52d76f2b\lwjgl.dll
    0x5ed00000 - 0x5edcc000  C:\WINDOWS\system32\OPENGL32.dll
    0x68b20000 - 0x68b40000  C:\WINDOWS\system32\GLU32.dll
    0x73760000 - 0x737ab000  C:\WINDOWS\system32\DDRAW.dll
    0x73bc0000 - 0x73bc6000  C:\WINDOWS\system32\DCIMAN32.dll
    0x77c00000 - 0x77c08000  C:\WINDOWS\system32\VERSION.dll
    0x070b0000 - 0x07115000  C:\DOCUME~1\7stl0225\LOCALS~1\Temp\libgdx7stl0225\52d76f2b\OpenAL32.dll
    0x7c9c0000 - 0x7d1d7000  C:\WINDOWS\system32\SHELL32.dll
    0x774e0000 - 0x7761d000  C:\WINDOWS\system32\ole32.dll
    0x5ad70000 - 0x5ada8000  C:\WINDOWS\system32\uxtheme.dll
    0x76fd0000 - 0x7704f000  C:\WINDOWS\system32\CLBCATQ.DLL
    0x77050000 - 0x77115000  C:\WINDOWS\system32\COMRes.dll
    0x77120000 - 0x771ab000  C:\WINDOWS\system32\OLEAUT32.dll
    0x73f10000 - 0x73f6c000  C:\WINDOWS\system32\dsound.dll
    0x76c30000 - 0x76c5e000  C:\WINDOWS\system32\WINTRUST.dll
    0x77a80000 - 0x77b15000  C:\WINDOWS\system32\CRYPT32.dll
    0x77b20000 - 0x77b32000  C:\WINDOWS\system32\MSASN1.dll
    0x76c90000 - 0x76cb8000  C:\WINDOWS\system32\IMAGEHLP.dll
    0x72d20000 - 0x72d29000  C:\WINDOWS\system32\wdmaud.drv
    0x72d10000 - 0x72d18000  C:\WINDOWS\system32\msacm32.drv
    0x77be0000 - 0x77bf5000  C:\WINDOWS\system32\MSACM32.dll
    0x77bd0000 - 0x77bd7000  C:\WINDOWS\system32\midimap.dll
    0x73ee0000 - 0x73ee4000  C:\WINDOWS\system32\KsUser.dll
    0x755c0000 - 0x755ee000  C:\WINDOWS\system32\msctfime.ime
    0x69000000 - 0x691a9000  C:\WINDOWS\system32\sisgl.dll
    0x73b30000 - 0x73b45000  C:\WINDOWS\system32\mscms.dll
    0x73000000 - 0x73026000  C:\WINDOWS\system32\WINSPOOL.DRV
    0x66e90000 - 0x66ed1000  C:\WINDOWS\system32\icm32.dll
    0x07760000 - 0x0778d000  C:\Program Files\WordWeb\WHook.dll
    0x74c80000 - 0x74cac000  C:\WINDOWS\system32\OLEACC.dll
    0x76080000 - 0x760e5000  C:\WINDOWS\system32\MSVCP60.dll

    VM Arguments:
    jvm_args: -Dfile.encoding=Cp1252
    java_command: sevenseas.game.MainDesktop
    Launcher Type: SUN_STANDARD

    Environment Variables:
    PATH=C:/Program Files/Java/jre7/bin/client;C:/Program Files/Java/jre7/bin;C:/Program Files/Java/jre7/lib/i386;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\Program Files\Java\jdk1.7.0\bin;C:\eclipse;
    USERNAME=7stl0225
    OS=Windows_NT
    PROCESSOR_IDENTIFIER=x86 Family 15 Model 4 Stepping 1, GenuineIntel

    --------------- S Y S T E M ---------------

    OS: Windows XP Build 2600 Service Pack 3
    CPU: total 1 (1 cores per cpu, 1 threads per core) family 15 model 4 stepping 1, cmov, cx8, fxsr, mmx, sse, sse2, sse3
    Memory: 4k page, physical 2031088k(939252k free), swap 3969920k(3011396k free)
    vm_info: Java HotSpot(TM) Client VM (21.0-b17) for windows-x86 JRE (1.7.0-b147), built on Jun 27 2011 02:25:52 by "java_re" with unknown MS VC++:1600
    time: Sat Oct 26 12:35:14 2013
    elapsed time: 0 seconds

    Read the article

  • Hard drive mounted at /, duplicate mounted hard drive after using MountManager

    - by HellHarvest
    Possible duplicate post. I'm running 12.04 64-bit. My system is a dual boot for both Ubuntu and Windows 7. Both operating systems are sharing the drive named "Elements". My volume named "Elements" is a 1TB SATA NTFS hard drive that shows up twice in the sidebar in Nautilus. One of the icons is functional and even has the convenient "eject" icon next to it. Below is a picture of the left menu in Nautilus, with the System Monitor File Systems tab open on top of it. Can someone advise me about how to get rid of this extra icon? I think the problem is much more deep-rooted than just a GUI glitch on Nautilus' part. The other icon does nothing but spit out the error shown further down when I click on it. This only happened AFTER I tried using Mount Manager to automate mounting the drive at startup. I've already uninstalled Mount Manager and restarted, but the problem didn't go away. The hard drive does mount automatically now, so I guess that's cool. But now, every time I boot up and open Nautilus, BOTH of these icons appear, one of which is fictitious and useless. According to the image above and the outputs of several other commands, it appears to be mounted at /. In which case, no matter where I am in Nautilus when I try to click on that icon, of course it will tell me that that drive is in use by another program… Nautilus. I'm afraid of trying to unmount this hard drive (sdb6) because of where it appears to be mounted. I'm kind of a noob, and I have this gut feeling that tells me trying to unmount a drive at / will destroy my entire file system. This fear was further strengthened by the output of "$ fsck" at the very bottom of this post.

    Error immediately below when that 2nd "Elements" hard drive is clicked in Nautilus:

    Unable to mount Elements
    Mount is denied because the NTFS volume is already exclusively opened. The volume may be already mounted, or another software may use it which could be identified for example by the help of the 'fuser' command.

    It's odd to me that the error message above claims that it's an NTFS volume when everything else tells me that it's an ext4 volume. The actual hard drive "Elements" is in fact an NTFS volume.

    Here's the output of a few commands and configuration files that may be of interest:

    $ fuser -a /
    /: 2120r 2159rc 2160rc 2172r 2178rc 2180rc 2188r 2191rc 2200rc 2203rc 2205rc 2206r 2211r 2212r 2214r 2220r 2228r 2234rc 2246rc 2249rc 2254rc 2260rc 2261r 2262r 2277rc 2287rc 2291rc 2311rc 2313rc 2332rc 2334rc 2339rc 2343rc 2344rc 2352rc 2372rc 2389rc 2422r 2490r 2496rc 2501rc 2566r 2573rc 2581rc 2589rc 2592r 2603r 2611rc 2613rc 2615rc 2678rc 2927r 2981r 3104rc 4156rc 4196rc 4206rc 4213rc 4240rc 4297rc 5032rc 7609r 7613r 7648r 9593rc 18829r 18833r 19776r

    $ sudo df -h
    Filesystem                 Size  Used Avail Use% Mounted on
    /dev/sdb6                  496G  366G  106G  78% /
    udev                       2.0G  4.0K  2.0G   1% /dev
    tmpfs                      791M  1.5M  790M   1% /run
    none                       5.0M     0  5.0M   0% /run/lock
    none                       2.0G  672K  2.0G   1% /run/shm
    /dev/sda1                  932G  312G  620G  34% /media/Elements
    /home/solderblob/.Private  496G  366G  106G  78% /home/solderblob
    /dev/sdb2                  188G  100G   88G  54% /media/A2B24EACB24E852F
    /dev/sdb1                  100M   25M   76M  25% /media/System Reserved

    $ sudo fdisk -l
    Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00093cab

    Device Boot      Start         End     Blocks  Id System
    /dev/sda1         2048  1953519615  976758784   7 HPFS/NTFS/exFAT

    Disk /dev/sdb: 750.2 GB, 750156374016 bytes
    255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000e8d9b

    Device Boot       Start         End     Blocks  Id System
    /dev/sdb1   *      2048      206847     102400   7 HPFS/NTFS/exFAT
    /dev/sdb2        206848   392378768  196085960+  7 HPFS/NTFS/exFAT
    /dev/sdb3     392380414  1465147391  536383489   5 Extended
    /dev/sdb5    1456762880  1465147391    4192256  82 Linux swap / Solaris
    /dev/sdb6     392380416  1448374271  527996928  83 Linux
    /dev/sdb7    1448376320  1456758783    4191232  82 Linux swap / Solaris

    Partition table entries are not in disk order

    $ cat /etc/fstab
    # <file system> <mount point> <type> <options> <dump> <pass>
    UUID=77039a2a-83d4-47a1-8a8c-a2ec4e4dfd0e  /                ext4     defaults  0  1
    UUID=F6549CC4549C88CF                      /media/Elements  ntfs-3g  users     0  0

    $ sudo blkid
    /dev/sda1: LABEL="Elements" UUID="F6549CC4549C88CF" TYPE="ntfs"
    /dev/sdb1: LABEL="System Reserved" UUID="5CDE130FDE12E156" TYPE="ntfs"
    /dev/sdb2: UUID="A2B24EACB24E852F" TYPE="ntfs"
    /dev/sdb6: UUID="77039a2a-83d4-47a1-8a8c-a2ec4e4dfd0e" TYPE="ext4"

    $ sudo blkid -c /dev/null (appears to be exactly the same as above)
    /dev/sda1: LABEL="Elements" UUID="F6549CC4549C88CF" TYPE="ntfs"
    /dev/sdb1: LABEL="System Reserved" UUID="5CDE130FDE12E156" TYPE="ntfs"
    /dev/sdb2: UUID="A2B24EACB24E852F" TYPE="ntfs"
    /dev/sdb6: UUID="77039a2a-83d4-47a1-8a8c-a2ec4e4dfd0e" TYPE="ext4"

    $ mount
    /dev/sdb6 on / type ext4 (rw)
    proc on /proc type proc (rw,noexec,nosuid,nodev)
    sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
    none on /sys/fs/fuse/connections type fusectl (rw)
    none on /sys/kernel/debug type debugfs (rw)
    none on /sys/kernel/security type securityfs (rw)
    udev on /dev type devtmpfs (rw,mode=0755)
    devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
    tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
    none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
    none on /run/shm type tmpfs (rw,nosuid,nodev)
    /dev/sda1 on /media/Elements type fuseblk (rw,noexec,nosuid,nodev,allow_other,blksize=4096)
    binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev)
    /home/solderblob/.Private on /home/solderblob type ecryptfs (ecryptfs_check_dev_ruid,ecryptfs_cipher=aes,ecryptfs_key_bytes=16,ecryptfs_unlink_sigs,ecryptfs_sig=76a47b0175afa48d,ecryptfs_fnek_sig=391b2d8b155215f7)
    gvfs-fuse-daemon on /home/solderblob/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=solderblob)
    /dev/sdb2 on /media/A2B24EACB24E852F type fuseblk (rw,nosuid,nodev,allow_other,default_permissions,blksize=4096)
    /dev/sdb1 on /media/System Reserved type fuseblk (rw,nosuid,nodev,allow_other,default_permissions,blksize=4096)

    $ ls -a
    . A2B24EACB24E852F Ubuntu 12.04.1 LTS amd64 .. Elements System Reserved

    $ cat /proc/mounts
    rootfs / rootfs rw 0 0
    sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
    proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
    udev /dev devtmpfs rw,relatime,size=2013000k,nr_inodes=503250,mode=755 0 0
    devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
    tmpfs /run tmpfs rw,nosuid,relatime,size=809872k,mode=755 0 0
    /dev/disk/by-uuid/77039a2a-83d4-47a1-8a8c-a2ec4e4dfd0e / ext4 rw,relatime,user_xattr,acl,barrier=1,data=ordered 0 0
    none /sys/fs/fuse/connections fusectl rw,relatime 0 0
    none /sys/kernel/debug debugfs rw,relatime 0 0
    none /sys/kernel/security securityfs rw,relatime 0 0
    none /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
    none /run/shm tmpfs rw,nosuid,nodev,relatime 0 0
    /dev/sda1 /media/Elements fuseblk rw,nosuid,nodev,noexec,relatime,user_id=0,group_id=0,allow_other,blksize=4096 0 0
    binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,nosuid,nodev,noexec,relatime 0 0
    /home/solderblob/.Private /home/solderblob ecryptfs rw,relatime,ecryptfs_fnek_sig=391b2d8b155215f7,ecryptfs_sig=76a47b0175afa48d,ecryptfs_cipher=aes,ecryptfs_key_bytes=16,ecryptfs_unlink_sigs 0 0
    gvfs-fuse-daemon /home/solderblob/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,relatime,user_id=1000,group_id=1000 0 0
    /dev/sdb2 /media/A2B24EACB24E852F fuseblk rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096 0 0
    /dev/sdb1 /media/System\040Reserved fuseblk rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096 0 0
    gvfs-fuse-daemon /root/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,relatime,user_id=0,group_id=0 0 0

    $ fsck
    fsck from util-linux 2.20.1
    e2fsck 1.42 (29-Nov-2011)
    /dev/sdb6 is mounted.
    WARNING!!! The filesystem is mounted. If you continue you ***WILL*** cause ***SEVERE*** filesystem damage.
    Do you really want to continue<n>? no
    check aborted.

    Read the article

  • Service Broker, not ETL

    - by jamiet
    I have been very quiet on this blog of late, and one reason for that is that I have been very busy on a client project that I would like to talk about a little here. The client I have been working for has a website that runs on a distributed architecture utilising a messaging infrastructure for communication between different endpoints. My brief was to build a system that could consume these messages and produce analytical information in near-real-time. More specifically, I basically had to deliver a data warehouse; however, it was the real-time aspect of the project that really intrigued me.

    This real-time requirement meant that using an Extract, Transform, Load (ETL) tool was out of the question, and so I had no choice but to write T-SQL code (i.e. stored procedures) to process the incoming messages and load the data into the data warehouse. This concerned me though – I had no way to control the rate at which data would arrive into the system, yet we were going to have end-users querying the system at the same time that those messages were arriving; the potential for contention in such a scenario was pretty high and was something I wanted to minimise as much as possible. Moreover, I did not want the processing of data inside the data warehouse to have any impact on the customer-facing website.

    As you have probably guessed from the title of this blog post, this is where Service Broker stepped in! For those that have not heard of it, Service Broker is a queuing technology that has been built into SQL Server since SQL Server 2005. It provides a number of features; however, the one that was of interest to me was the fact that it facilitates asynchronous data processing which, in layman's terms, means the ability to process some data without the system that supplied the data having to wait for the response. That was a crucial feature because on this project the customer-facing website (in effect an OLTP system) would be calling one of our stored procedures with each message – we did not want to cause the OLTP system to wait on us every time we processed one of those messages. This asynchronous nature also helps to alleviate the contention problem because the asynchronous processing activity is handled just like any other task in the database engine and hence can wait on another task (such as an end-user query).

    Service Broker it was then! The stored procedure called by the OLTP system would simply put the message onto a queue, and we would use a feature called activation to pick each message off the queue in turn and process it into the warehouse (a sketch of activation appears after the script at the end of this post). At the time of writing the system is not yet up to full capacity, but so far everything seems to be working OK (touch wood) and, crucially, our users are seeing data in near-real-time. By near-real-time I am talking about latencies of a few minutes at most, and to someone like me who is used to building systems that have overnight latencies that is a huge step forward!

    So then, am I advocating that you all go out and dump your ETL tools? Of course not, no! What this project has taught me though is that in certain scenarios there may be better ways to implement a data warehouse system than the traditional "load data in overnight" approach that we are all used to. Moreover, I have really enjoyed getting to grips with a new technology, and even if you don't want to use Service Broker you might want to consider asynchronous messaging architectures for your BI/data warehousing solutions in the future.
    This has been a very high-level overview of my use of Service Broker, and I have deliberately left out much of the minutiae of what has been a very challenging implementation. Nonetheless I hope I have caused you to reflect upon your own approaches to BI and question whether other approaches may be more tenable. All comments and questions gratefully received!

    Lastly, if you have never used Service Broker before and want to kick the tyres, I have provided below a very simple "Service Broker Hello World" script that will create all of the objects required to facilitate Service Broker communications and then send the message "Hello World" from one place to another! This doesn't represent a "proper" implementation per se, because it doesn't close down conversation objects (which you should always do in a real-world scenario), but it's enough to demonstrate the capabilities!

    @Jamiet

    -----------------------------------------------------------------------------------------------
    /* This is a basic Service Broker Hello World app. Have fun! -Jamie */
    USE MASTER
    GO
    CREATE DATABASE SBTest
    GO
    --Turn Service Broker on!
    ALTER DATABASE SBTest SET ENABLE_BROKER
    GO
    USE SBTest
    GO
    -- 1) we need to create a message type. Note that our message type is
    -- very simple and allows any type of content
    CREATE MESSAGE TYPE HelloMessage VALIDATION = NONE
    GO
    -- 2) Once the message type has been created, we need to create a contract
    -- that specifies who can send what types of messages
    CREATE CONTRACT HelloContract (HelloMessage SENT BY INITIATOR)
    GO
    --We can query the metadata of the objects we just created
    SELECT * FROM sys.service_message_types WHERE name = 'HelloMessage';
    SELECT * FROM sys.service_contracts WHERE name = 'HelloContract';
    SELECT * FROM sys.service_contract_message_usages
    WHERE  service_contract_id IN (SELECT service_contract_id FROM sys.service_contracts WHERE name = 'HelloContract')
    AND    message_type_id IN (SELECT message_type_id FROM sys.service_message_types WHERE name = 'HelloMessage');
    -- 3) The communication is between two endpoints. Thus, we need two queues to
    -- hold messages
    CREATE QUEUE SenderQueue
    CREATE QUEUE ReceiverQueue
    GO
    --more querying metadata
    SELECT * FROM sys.service_queues WHERE name IN ('SenderQueue','ReceiverQueue');
    --we can also select from the queues as if they were tables
    SELECT * FROM SenderQueue
    SELECT * FROM ReceiverQueue
    -- 4) Create the required services and bind them to the above created queues
    CREATE SERVICE Sender   ON QUEUE SenderQueue
    CREATE SERVICE Receiver ON QUEUE ReceiverQueue (HelloContract)
    GO
    --more querying metadata
    SELECT * FROM sys.services WHERE name IN ('Receiver','Sender');
    -- 5) At this point, we can begin the conversation between the two services by
    -- sending messages
    DECLARE @conversationHandle UNIQUEIDENTIFIER
    DECLARE @message NVARCHAR(100)
    BEGIN
      BEGIN TRANSACTION;
      BEGIN DIALOG @conversationHandle
            FROM SERVICE Sender
            TO SERVICE 'Receiver'
            ON CONTRACT HelloContract
            WITH ENCRYPTION=OFF
      -- Send a message on the conversation
      SET @message = N'Hello, World';
      SEND ON CONVERSATION @conversationHandle
           MESSAGE TYPE HelloMessage (@message)
      COMMIT TRANSACTION
    END
    GO
    --check contents of queues
    SELECT * FROM SenderQueue
    SELECT * FROM ReceiverQueue
    GO
    -- Receive a message from the queue
    RECEIVE CONVERT(NVARCHAR(MAX), message_body) AS MESSAGE
    FROM ReceiverQueue
    GO
    --If no messages were received and/or you can't see anything on the queues you may wish to check the following for clues:
    SELECT * FROM sys.transmission_queue
    -- Cleanup
    DROP SERVICE Sender
    DROP SERVICE Receiver
    DROP QUEUE SenderQueue
    DROP QUEUE ReceiverQueue
    DROP CONTRACT HelloContract
    DROP MESSAGE TYPE HelloMessage
    GO
    USE MASTER
    GO
    DROP DATABASE SBTest
    GO
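    One thing the Hello World script does not show is the activation feature mentioned above. The sketch below is an illustration only, not part of the original script; it assumes the SBTest objects created above and attaches a hypothetical procedure to ReceiverQueue so messages are processed as they arrive rather than RECEIVEd by hand:

    -- Hypothetical activation procedure for ReceiverQueue (illustrative sketch)
    CREATE PROCEDURE dbo.ProcessReceiverQueue
    AS
    BEGIN
        DECLARE @handle UNIQUEIDENTIFIER;
        DECLARE @body NVARCHAR(MAX);
        RECEIVE TOP(1) @handle = conversation_handle,
                       @body = CONVERT(NVARCHAR(MAX), message_body)
        FROM ReceiverQueue;
        IF @handle IS NOT NULL
        BEGIN
            -- process @body here (e.g. load it into the warehouse)
            END CONVERSATION @handle;
        END
    END
    GO
    -- Tell the queue to run the procedure whenever messages arrive
    ALTER QUEUE ReceiverQueue
        WITH ACTIVATION (
            STATUS = ON,
            PROCEDURE_NAME = dbo.ProcessReceiverQueue,
            MAX_QUEUE_READERS = 1,
            EXECUTE AS SELF );
    GO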

    Read the article

  • Can Google Employees See My Saved Google Chrome Passwords?

    - by Jason Fitzpatrick
    Storing your passwords in your web browser seems like a great time saver, but are the passwords secure and inaccessible to others (even employees of the browser company) when squirreled away?

    Today's Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    The Question

    SuperUser reader MMA is curious if Google employees have (or could have) access to the passwords he stores in Google Chrome:

    I understand that we are really tempted to save our passwords in Google Chrome. The likely benefit is twofold:
    - You don't need to (memorize and) input those long and cryptic passwords.
    - These are available wherever you are once you log in to your Google account.

    The last point sparked my doubt. Since the password is available anywhere, the storage must be in some central location, and this should be at Google. Now, my simple question is, can a Google employee see my passwords? Searching over the Internet revealed several articles/messages:
    - Do you save passwords in Chrome? Maybe you should reconsider: Talks about your passwords being stolen by someone who has access to your computer account. Nothing mentioned about the central storage security and vulnerability. There is even a response from the Chrome browser security tech lead about the first issue.
    - Chrome's insane password security strategy: Mostly along the same line. You can steal passwords from somebody if you have access to the computer account.
    - How to Steal Passwords Saved in Google Chrome in 5 Simple Steps: Teaches you how to actually perform the act mentioned in the previous two when you have access to somebody else's account.

    There are many more (including this one at this site), mostly along the same line: points, counter-points, huge debates. I refrain from mentioning them here; simply run a search if you want to find them.

    Coming back to my original query: can a Google employee see my password? Since I can view the password using a simple button, they can definitely be unhashed (decrypted) even if encrypted. This is very different from the passwords saved in Unix-like OSes, where the saved password can never be seen in plain text. They use a one-way encryption algorithm to encrypt your passwords. This encrypted password is then stored in the passwd or shadow file. When you attempt to log in, the password you type in is encrypted again and compared with the entry in the file that stores your passwords. If they match, it must be the same password, and you are allowed access. Thus, a superuser can change my password, can block my account, but he can never see my password. (A small sketch of this scheme appears at the end of this post.)

    So are his concerns well founded, or will a little insight dispel his worry?

    The Answer

    SuperUser contributor Zeel helps put his mind at ease:

    Short answer: No*

    Passwords stored on your local machine can be decrypted by Chrome, as long as your OS user account is logged in. And then you can view those in plain text. At first this seems horrible, but how did you think auto-fill worked? When that password field gets filled in, Chrome must insert the real password into the HTML form element – or else the page wouldn't work right, and you could not submit the form. And if the connection to the website is not over HTTPS, the plain text is then sent over the internet. In other words, if Chrome can't get the plain text passwords, then they are totally useless. A one-way hash is no good, because we need to use them.
    Now the passwords are in fact encrypted; the only way to get them back to plain text is to have the decryption key. That key is your Google password, or a secondary key you can set up. When you sign into Chrome and sync, the Google servers will transmit the encrypted passwords, settings, bookmarks, auto-fill, etc., to your local machine. Here Chrome will decrypt the information and be able to use it. On Google's end all that info is stored in its encrypted state, and they do not have the key to decrypt it. Your account password is checked against a hash to log in to Google, and even if you let Chrome remember it, that encrypted version is hidden in the same bundle as the other passwords, impossible to access. So an employee could probably grab a dump of the encrypted data, but it wouldn't do them any good, since they would have no way to use it.*

    So no, Google employees can not** access your passwords, since they are encrypted on their servers.

    * However, do not forget that any system that can be accessed by an authorized user can be accessed by an unauthorized user. Some systems are easier to break than others, but none are fail-proof. That being said, I think I will trust Google and the millions they spend on security systems over any other password storage solution. And heck, I'm a wimpy nerd; it would be easier to beat the passwords out of me than to break Google's encryption.

    ** I am also assuming that there isn't a person who just happens to work for Google gaining access to your local machine. In that case you are screwed, but employment at Google isn't actually a factor any more. Moral: hit Win + L before leaving your machine.

    While we agree with Zeel that it's a pretty safe bet (as long as your computer is not compromised) that your passwords are in fact safe while stored in Chrome, we prefer to encrypt all our logins and passwords in a LastPass vault.

    Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.
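    For reference, the one-way scheme MMA describes for Unix passwords can be sketched in a few lines of Python. This illustrates only the hash-and-compare idea; real systems use crypt/bcrypt with many iterations rather than a bare SHA-256:

        # Sketch of one-way password verification: store salt + hash, never the password.
        import hashlib
        import os

        def make_entry(password):
            salt = os.urandom(16)
            digest = hashlib.sha256(salt + password.encode()).hexdigest()
            return salt.hex() + '$' + digest  # what a shadow-style file would store

        def check(password, entry):
            salt_hex, digest = entry.split('$')
            candidate = hashlib.sha256(bytes.fromhex(salt_hex) + password.encode()).hexdigest()
            return candidate == digest  # a match means the same password; access allowed

        entry = make_entry('s3cret')
        print(check('s3cret', entry))  # True
        print(check('guess', entry))   # False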

    Read the article

  • Organization & Architecture UNISA Studies – Chap 4

    - by MarkPearl
    Learning Outcomes
    - Explain the characteristics of memory systems
    - Describe the memory hierarchy
    - Discuss cache memory principles
    - Discuss issues relevant to cache design
    - Describe the cache organization of the Pentium

    Computer Memory Systems

    There are key characteristics of memory…
    - Location – internal or external
    - Capacity – expressed in terms of bytes
    - Unit of Transfer – the number of bits read out of or written into memory at a time
    - Access Method – sequential, direct, random or associative

    From a user's perspective the two most important characteristics of memory are…
    - Capacity
    - Performance – access time, memory cycle time, transfer rate

    The trade-off for memory happens along three axes…
    - Faster access time, greater cost per bit
    - Greater capacity, smaller cost per bit
    - Greater capacity, slower access time

    This leads to people using a tiered approach in their use of memory. As one goes down the hierarchy, the following occurs…
    - Decreasing cost per bit
    - Increasing capacity
    - Increasing access time
    - Decreasing frequency of access of the memory by the processor

    The use of two levels of memory to reduce average access time works in principle, but only if conditions 1 to 4 apply. A variety of technologies exist that allow us to accomplish this. Thus it is possible to organize data across the hierarchy such that the percentage of accesses to each successively lower level is substantially less than that of the level above. (The standard average access time formula for a two-level memory is given after the cache design discussion below.)

    A portion of main memory can be used as a buffer to hold data temporarily that is to be read out to disk. This is sometimes referred to as a disk cache and improves performance in two ways…
    - Disk writes are clustered. Instead of many small transfers of data, we have a few large transfers of data. This improves disk performance and minimizes processor involvement.
    - Some data designed for write-out may be referenced by a program before the next dump to disk. In that case the data is retrieved rapidly from the software cache rather than slowly from disk.

    Cache Memory Principles

    Cache memory is substantially faster than main memory. A caching system works as follows… When a processor attempts to read a word of memory, a check is made to see if this is in cache memory…
    - If it is, the data is supplied.
    - If it is not in the cache, a block of main memory, consisting of a fixed number of words, is loaded into the cache.

    Because of the phenomenon of locality of reference, when a block of data is fetched into the cache, it is likely that there will be future references to that same memory location or to other words in the block.

    Elements of Cache Design

    While there are a large number of cache implementations, there are a few basic design elements that serve to classify and differentiate cache architectures…
    - Cache Addresses
    - Cache Size
    - Mapping Function
    - Replacement Algorithm
    - Write Policy
    - Line Size
    - Number of Caches

    Cache Addresses

    Almost all non-embedded processors support virtual memory. Virtual memory in essence allows a program to address memory from a logical point of view without needing to worry about the amount of physical memory available. When virtual addresses are used, the designer may choose to place the cache between the MMU (memory management unit) and the processor or between the MMU and main memory.
    The disadvantage of virtual memory is that most virtual memory systems supply each application with the same virtual memory address space (each application sees virtual memory starting at memory address 0), which means the cache memory must be completely flushed with each application context switch, or extra bits must be added to each line of the cache to identify which virtual address space the address refers to.

    Cache Size

    We would like the size of the cache to be small enough so that the overall average cost per bit is close to that of main memory alone, and large enough so that the overall average access time is close to that of the cache alone. Also, larger caches are slightly slower than smaller ones.

    Mapping Function

    Because there are fewer cache lines than main memory blocks, an algorithm is needed for mapping main memory blocks into cache lines. The choice of mapping function dictates how the cache is organized. Three techniques can be used…
    - Direct – simplest technique, maps each block of main memory into only one possible cache line (a small sketch of this technique follows this section)
    - Associative – permits each main memory block to be loaded into any line of the cache
    - Set Associative – exhibits the strengths of both the direct and associative approaches while reducing their disadvantages

    For detailed explanations of each approach – read the text book (page 148 – 154).

    Replacement Algorithm

    For associative and set-associative mapping, a replacement algorithm is needed to determine which of the existing blocks in the cache must be replaced by a new block. There are four common approaches…
    - LRU (Least recently used)
    - FIFO (First in first out)
    - LFU (Least frequently used)
    - Random selection

    Write Policy

    When a block resident in the cache is to be replaced, there are two cases to consider:
    - If no writes to that block have happened in the cache – discard it
    - If a write has occurred, a process needs to be initiated where the changes in the cache are propagated back to main memory

    There are several approaches to achieve this, including…
    - Write Through – all writes to the cache are done to main memory as well at the point of the change
    - Write Back – when a block is replaced, all dirty bits are written back to main memory

    The problem is complicated when we have multiple caches; there are techniques to accommodate this, but I have not summarized them.

    Line Size

    When a block of data is retrieved and placed in the cache, not only the desired word but also some number of adjacent words are retrieved. As the block size increases from very small to larger sizes, the hit ratio will at first increase because of the principle of locality, which states that the data in the vicinity of a referenced word are likely to be referenced in the near future. As the block size increases, more useful data are brought into the cache. The hit ratio will begin to decrease, however, as the block becomes even bigger and the probability of using the newly fetched information becomes less than the probability of reusing the information that has to be replaced. Two specific effects come into play…
    - Larger blocks reduce the number of blocks that fit into a cache. Because each block fetch overwrites older cache contents, a small number of blocks results in data being overwritten shortly after they are fetched.
    - As a block becomes larger, each additional word is farther from the requested word and therefore less likely to be needed in the near future.

    The relationship between block size and hit ratio is complex, and no set approach is judged to be the best in all circumstances.
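    The two-level access time claim near the start of this summary can be written explicitly; this standard formula is not spelled out in the chapter summary itself. With hit ratio H, level-1 access time T1 and level-2 access time T2:

        $T_{avg} = H \cdot T_1 + (1 - H) \cdot (T_1 + T_2)$

    For example, assuming T1 = 0.1 µs, T2 = 1 µs and H = 0.95, we get T_avg = 0.95(0.1) + 0.05(1.1) = 0.15 µs – close to the access time of the faster level alone, which is exactly the point of the hierarchy.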
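    And a minimal sketch of the direct mapping technique described under Mapping Function above, with sizes (4-byte lines, a 16-line cache) assumed purely for illustration:

        # Direct mapping: each memory block maps to exactly one cache line.
        LINE_SIZE = 4   # bytes per cache line (assumed)
        NUM_LINES = 16  # lines in the cache (assumed)

        def split_address(addr):
            """Split a byte address into (tag, line, offset) for a direct-mapped cache."""
            offset = addr % LINE_SIZE   # byte within the line
            block = addr // LINE_SIZE   # main-memory block number
            line = block % NUM_LINES    # the single line this block can occupy
            tag = block // NUM_LINES    # disambiguates blocks sharing that line
            return tag, line, offset

        cache = {}  # line -> (tag, data)

        def read(addr, memory):
            """Return the byte at addr, loading its whole block into the cache on a miss."""
            tag, line, offset = split_address(addr)
            entry = cache.get(line)
            if entry is None or entry[0] != tag:  # miss: fetch the whole block
                start = (addr // LINE_SIZE) * LINE_SIZE
                cache[line] = (tag, memory[start:start + LINE_SIZE])
            return cache[line][1][offset]

        memory = bytes(range(256))
        print(read(0x2A, memory))  # first access misses and loads the block
        print(read(0x2B, memory))  # same block: a hit, served from the cache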
    Pentium 4 and ARM cache organizations

    The processor core consists of four major components:
    - Fetch/decode unit – fetches program instructions in order from the L2 cache, decodes these into a series of micro-operations, and stores the results in the L1 instruction cache
    - Out-of-order execution logic – schedules execution of the micro-operations subject to data dependencies and resource availability; thus micro-operations may be scheduled for execution in a different order than they were fetched from the instruction stream. As time permits, this unit schedules speculative execution of micro-operations that may be required in the future
    - Execution units – these units execute micro-operations, fetching the required data from the L1 data cache and temporarily storing results in registers
    - Memory subsystem – this unit includes the L2 and L3 caches and the system bus, which is used to access main memory when the L1 and L2 caches have a cache miss and to access the system I/O resources

    Read the article

  • Different Not Automatically Implies Better

    - by Alois Kraus
    Originally posted on: http://geekswithblogs.net/akraus1/archive/2013/11/05/154556.aspx

    Recently I was digging deeper into why some WCF-hosted workflow application consumed quite a lot of memory although it basically only loads a xaml workflow. The first tool of choice is Process Explorer, or even better Process Hacker (it has more options, and the best feature – copy & paste – does work). The three most important numbers of a process with regard to memory are Working Set, Private Working Set and Private Bytes:
    - Working Set is the currently consumed physical memory (parts can be shared between processes, e.g. loaded dlls which are read only)
    - Private Working Set is the physical memory needed by this process which is not shareable
    - Private Bytes is the number of non-shareable bytes which are only visible in the current process (e.g. all new, malloc and VirtualAlloc calls do create private bytes)

    When you have a bigger workflow, it can easily consume 500MB under 64 bit for a 1-2 MB xaml file. That does not look very scalable. Under 64 bit the issue is excessive private bytes consumption and not the managed heap. The picture is quite different for 32 bit, which looks a bit strange, but it seems that the hosted VB compiler is a lot less memory hungry under 32 bit. I did try to repro the issue with a medium-sized xaml file (400KB) which contains 1000 variables and 1000 ifs, which can be represented by C# code like this:

    string Var1;
    string Var2;
    ...
    string Var1000;

    if (!String.IsNullOrEmpty(Var1)) { Console.WriteLine("Var1"); }
    if (!String.IsNullOrEmpty(Var2)) { Console.WriteLine("Var2"); }
    ....

    Since WF is based on VB.NET expressions, you are bound to the hosted VB.NET compiler, which results in (x64) 140 MB of private bytes – ca. 140 KB for each if clause, which is quite a lot if you think about the actually present functionality. But there is hope: .NET 4.5 does allow C# expressions for WF, which is a major step forward for all C# lovers. I did create a simple patcher to "cross compile" my xaml to C# expressions. Let's look at the result (the original post shows side-by-side screenshots comparing C# expressions and VB expressions, both x86):

    On my home machine I have only 32 bit, which gives you quite exactly half of the memory consumption under 64 bit. C# expressions are 10 times more memory hungry than VB.NET expressions! I wanted to do more with less memory, but instead it consumed a magnitude more memory. That is surprising, to say the least. The workflow initializes in about the same time under x64 and x86, where the VB code does it in 2s whereas the C# version needs 18s – also nearly ten times slower. That is too high a price to pay for any bigger-sized xaml workflow to convert from VB.NET to C# expressions. If I reduce the number of expressions to 500, then it needs 400MB, which is about half of the memory. It seems that the cost per if rises linearly with the number of total expressions in a xaml workflow.

    Expression Language   Cost per IF   Startup Time
    C# 1000 Ifs x64       1.5 MB        18s
    C# 500 Ifs x64        750 KB        9s
    VB 1000 Ifs x64       140 KB        2s
    VB 500 Ifs x64        70 KB         1s

    Now we can directly compare two MS implementations. It is clear that the VB.NET compiler uses the same underlying structure, but it has a much higher offset compared to the highly inefficient C# expression compiler. I have filed a connect bug here with a harsher wording about recent advances in memory consumption. The funniest thing is that one MS employee gave an Azure AppFabric demo around early 2011 which was so slow that he needed to investigate with xperf.
    He was after startup time, and the call stacks with regard to VB.NET expression compilation were remarkably similar. In fact I only found this post by googling for parts of my call stacks.

    … "C# expressions will be coming soon to WF, and that will have different performance characteristics than VB" …

    What did he know in Jan 2011 that I did not know until today? ;-) He knew that C# expressions would come, but also that they would not automatically have a better footprint. It is about time to fix that. In its current state C# expressions are not usable for bigger workflows. That also explains the headline for today. You can cheat startup time by prestarting workflows so that the demo looks nice and snappy, but it does hurt scalability a lot, since you do need much more memory than necessary.

    I did find the stacks by enabling virtual allocation tracking within XPerf, which is still the best tool out there. But first you need to look at your process to check where the memory is hiding (the original post shows a Process Hacker screenshot here). For the C# expression compiler you do not need xperf: you can directly dump the managed heap and check with a profiler of your choice. But if the allocations are happening on the Private Data (VirtualAlloc), you can find them with xperf. There is a nice video on Channel 9 explaining VirtualAlloc tracking in greater detail. If your data allocations are on the Heap, it means that the C/C++ runtime created a heap for you from which all malloc and new calls allocate. You can enable heap tracing with xperf, with full call stack support as well, which is also shown on Channel 9. Or you can use WPRUI directly (the original post shows a screenshot here).

    To make "Heap Usage" work you need to set the tracing flags for your executable (before you start it). For example, for devenv.exe:

    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\devenv.exe
        DWORD TracingFlags = 1

    (An equivalent command-line form appears at the end of this post.) Do not forget to disable it after you have completed profiling the process, or it will impact the startup time quite a lot. With xperf you can attach directly to a running process and collect heap allocation information from a process gone wild. Very handy if you need to find out what a process was doing which has arrived in a funny state. "VirtualAlloc usage" works without explicitly enabling anything for a specific process and is always on machine-wide. I had issues on my Windows 7 machines with the call stack collection and the latest Windows 8.1 Performance Toolkit. I was told that WPA from Windows 8.0 should work fine, but I do not want to downgrade.
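    For convenience, the TracingFlags value above can also be set from an elevated command prompt; this is simply an equivalent of the manual registry edit described above, not something from the original post:

        rem Enable heap tracing for devenv.exe
        reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\devenv.exe" /v TracingFlags /t REG_DWORD /d 1
        rem Disable again when profiling is complete
        reg delete "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\devenv.exe" /v TracingFlags /f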

    Read the article

  • Form contents not showing in email

    - by fmz
    This is a followup to a question I posted yesterday. I thought everything was working fine, but today, I am not getting any results in the email from the drop down field. Here is the form code in question: <label for="purpose"><span class="required">*</span> Purpose</label> <select id="purpose" name="purpose" style="width: 300px; height:35px;"> <option value="" selected="selected">-- Select One --</option> <option value="I am interested in your services">I am interested in your services!</option> <option value="I am interested in a partnership">I am interested in a partnership!</option> <option value="I am interested in a job">I am interested in a job!</option> </select> It is then processed in PHP and should output the selected option to an email, however the Reason for Contact line always comes through with nothing in it. Here is the PHP code: <?php if(!$_POST) exit; $name = $_POST['name']; $company = $_POST['company']; $email = $_POST['email']; $phone = $_POST['phone']; $purpose = $_POST['purpose']; $comments = $_POST['comments']; $verify = $_POST['verify']; if(trim($name) == '') { echo '<div class="error_message">Attention! You must enter your name.</div>'; exit(); } else if(trim($email) == '') { echo '<div class="error_message">Attention! Please enter a valid email address.</div>'; exit(); } else if(trim($phone) == '') { echo '<div class="error_message">Attention! Please enter a valid phone number.</div>'; exit(); } else if(!isEmail($email)) { echo '<div class="error_message">Attention! You have enter an invalid e-mail address, try again.</div>'; exit(); } if(trim($comments) == '') { echo '<div class="error_message">Attention! Please enter your message.</div>'; exit(); } else if(trim($verify) == '') { echo '<div class="error_message">Attention! Please enter the verification number.</div>'; exit(); } else if(trim($verify) != '4') { echo '<div class="error_message">Attention! The verification number you entered is incorrect.</div>'; exit(); } if($error == '') { if(get_magic_quotes_gpc()) { $comments = stripslashes($comments); } // Configuration option. // Enter the email address that you want to emails to be sent to. // Example $address = "[email protected]"; $address = "[email protected]"; // Configuration option. // i.e. The standard subject will appear as, "You've been contacted by John Doe." // Example, $e_subject = '$name . ' has contacted you via Your Website.'; $e_subject = 'You\'ve been contacted by ' . $name . '.'; // Configuration option. // You can change this if you feel that you need to. // Developers, you may wish to add more fields to the form, in which case you must be sure to add them here. $e_body = "You have been contacted by $name.\r\n\n"; $e_company = "Company: $company\r\n\n"; $e_content = "Comments: \"$comments\"\r\n\n"; $e_purpose = "Reason for contact: $purpose\r\n\n"; $e_reply = "You can contact $name via email, $email or via phone $phone"; $msg = $e_body . $e_content . $e_company . $e_purpose . $e_reply; if(mail($address, $e_subject, $msg, "From: $email\r\nReply-To: $email\r\nReturn-Path: $email\r\n")) { // Email has sent successfully, echo a success page. echo "<fieldset>"; echo "<div id='success_page'>"; echo "<h1>Email Sent Successfully.</h1>"; echo "<p>Thank you <strong>$name</strong>, your message has been submitted to us.</p>"; echo "</div>"; echo "</fieldset>"; } else { echo 'ERROR!'; } } function isEmail($email) { // Email address verification, do not edit. 
return(preg_match("/^[-_.[:alnum:]]+@((([[:alnum:]]|[[:alnum:]][[:alnum:]-]*[[:alnum:]])\.)+(ad|ae|aero|af|ag|ai|al|am|an|ao|aq|ar|arpa|as|at|au|aw|az|ba|bb|bd|be|bf|bg|bh|bi|biz|bj|bm|bn|bo|br|bs|bt|bv|bw|by|bz|ca|cc|cd|cf|cg|ch|ci|ck|cl|cm|cn|co|com|coop|cr|cs|cu|cv|cx|cy|cz|de|dj|dk|dm|do|dz|ec|edu|ee|eg|eh|er|es|et|eu|fi|fj|fk|fm|fo|fr|ga|gb|gd|ge|gf|gh|gi|gl|gm|gn|gov|gp|gq|gr|gs|gt|gu|gw|gy|hk|hm|hn|hr|ht|hu|id|ie|il|in|info|int|io|iq|ir|is|it|jm|jo|jp|ke|kg|kh|ki|km|kn|kp|kr|kw|ky|kz|la|lb|lc|li|lk|lr|ls|lt|lu|lv|ly|ma|mc|md|mg|mh|mil|mk|ml|mm|mn|mo|mp|mq|mr|ms|mt|mu|museum|mv|mw|mx|my|mz|na|name|nc|ne|net|nf|ng|ni|nl|no|np|nr|nt|nu|nz|om|org|pa|pe|pf|pg|ph|pk|pl|pm|pn|pr|pro|ps|pt|pw|py|qa|re|ro|ru|rw|sa|sb|sc|sd|se|sg|sh|si|sj|sk|sl|sm|sn|so|sr|st|su|sv|sy|sz|tc|td|tf|tg|th|tj|tk|tm|tn|to|tp|tr|tt|tv|tw|tz|ua|ug|uk|um|us|uy|uz|va|vc|ve|vg|vi|vn|vu|wf|ws|ye|yt|yu|za|zm|zw)$|(([0-9][0-9]?|[0-1][0-9][0-9]|[2][0-4][0-9]|[2][5][0-5])\.){3}([0-9][0-9]?|[0-1][0-9][0-9]|[2][0-4][0-9]|[2][5][0-5]))$/i",$email)); } ?> Any assistance would be greatly appreciated. Thanks!
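    One way to narrow this down, sketched here under the assumption that the script above is reachable as contact.php (hypothetical URL): post the field from the command line with curl, bypassing the browser entirely. If $purpose shows up in the email this way, the PHP is fine and the problem is on the page (for example a script-styled dropdown that never fills the real select); if it is still empty, the value is being lost server-side.

        # Hypothetical URL; field names taken from the form above
        curl -d "name=Test" -d "email=test@example.com" -d "phone=555" \
             -d "comments=hi" -d "verify=4" \
             --data-urlencode "purpose=I am interested in your services" \
             http://localhost/contact.php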

    Read the article

  • Flash core engine by Dinesh [closed]

    - by hdinesh
    This post was a dump of the following code (without the highlights). No question, just a dump. Please update this q. with a real question to have it reopened. You (the asker) risk to be flagged as spammer (if not already) and a bad reputation. This is a q/a site, not a site to promote your own code libraries. package facers { import flash.display.*; import flash.events.*; import flash.geom.ColorTransform; import flash.utils.Dictionary; import org.papervision3d.cameras.*; import org.papervision3d.scenes.*; import org.papervision3d.objects.*; import org.papervision3d.objects.special.*; import org.papervision3d.objects.primitives.*; import org.papervision3d.materials.*; import org.papervision3d.events.FileLoadEvent; import org.papervision3d.materials.special.*; import org.papervision3d.materials.shaders.*; import org.papervision3d.materials.utils.*; import org.papervision3d.lights.*; import org.papervision3d.render.*; import org.papervision3d.view.*; import org.papervision3d.events.InteractiveScene3DEvent; import org.papervision3d.events.*; import org.papervision3d.core.utils.*; import org.papervision3d.core.geom.renderables.Vertex3D; import caurina.transitions.*; public class Main extends Sprite { public var viewport :BasicView; public var displayObject :DisplayObject3D; private var light :PointLight3D; private var shadowPlane :Plane; private var dataArray :Array; private var material :BitmapFileMaterial; private var planeByContainer :Dictionary = new Dictionary(); private var paperSize :Number = 0.5; private var cloudSize :Number = 1500; private var rotSize :Number = 360; private var maxAlbums :Number = 50; private var num :Number = 0; public function Main():void { trace("START APPLICATION"); viewport = new BasicView(1024, 690, true, true, CameraType.FREE); viewport.camera.zoom = 50; viewport.camera.extra = { goPosition: new DisplayObject3D(),goTarget: new DisplayObject3D() }; addChild(viewport); displayObject = new DisplayObject3D(); viewport.scene.addChild(displayObject); createAlbum(); addEventListener(Event.ENTER_FRAME, onRenderEvent); } private function createAlbum() { dataArray = new Array("images/thums/pic1.jpg", "images/thums/pic2.jpg", "images/thums/pic3.jpg", "images/thums/pic4.jpg", "images/thums/pic5.jpg", "images/thums/pic6.jpg", "images/thums/pic7.jpg", "images/thums/pic8.jpg", "images/thums/pic9.jpg", "images/thums/pic10.jpg", "images/thums/pic1.jpg", "images/thums/pic2.jpg", "images/thums/pic3.jpg", "images/thums/pic4.jpg", "images/thums/pic5.jpg", "images/thums/pic6.jpg", "images/thums/pic7.jpg", "images/thums/pic8.jpg", "images/thums/pic9.jpg", "images/thums/pic10.jpg"); for (var i:int = 0; i < dataArray.length; i++) { material = new BitmapFileMaterial(dataArray[i]); material.doubleSided = true; material.addEventListener(FileLoadEvent.LOAD_COMPLETE, loadMaterial); } } public function loadMaterial(event:Event) { var plane:Plane = new Plane(material, 300, 180); displayObject.addChild(plane); var _x:int = Math.random() * cloudSize - cloudSize/2; var _y:int = Math.random() * cloudSize - cloudSize/2; var _z:int = Math.random() * cloudSize - cloudSize/2; var _rotationX:int = Math.random() * rotSize; var _rotationY:int = Math.random() * rotSize; var _rotationZ:int = Math.random() * rotSize; Tweener.addTween(plane, { x:_x, y:_y, z:_z, rotationX:_rotationX, rotationY:_rotationY, rotationZ:_rotationZ, time:5, transition:"easeIn" } ); } protected function onRenderEvent(event:Event):void { var rotY: Number = (mouseY-(stage.stageHeight/2))/(900/2)*(1200); var rotX: Number = 
(mouseX-(stage.stageWidth/2))/(600/2)*(-1200); displayObject.rotationY = viewport.camera.x + (rotX - viewport.camera.x) / 50; displayObject.rotationX = viewport.camera.y + (rotY - viewport.camera.y) / 30; viewport.singleRender(); } } } package designLab.events { import flash.display.BlendMode; import flash.display.Sprite; import flash.events.Event; import flash.filters.BlurFilter; // Import designLab import designLab.layer.IntroLayer; import designLab.shadow.ShadowCaster; import designLab.utils.LayerConstant; // Import Papervision3D import org.papervision3d.cameras.*; import org.papervision3d.scenes.*; import org.papervision3d.objects.*; import org.papervision3d.objects.special.*; import org.papervision3d.objects.primitives.*; import org.papervision3d.materials.*; import org.papervision3d.materials.special.*; import org.papervision3d.materials.shaders.*; import org.papervision3d.materials.utils.*; import org.papervision3d.lights.*; import org.papervision3d.render.*; import org.papervision3d.view.*; import org.papervision3d.events.InteractiveScene3DEvent; import org.papervision3d.events.*; import org.papervision3d.core.utils.*; import org.papervision3d.core.geom.renderables.Vertex3D; public class CoreEnging extends Sprite { public var viewport :BasicView; // Create BasicView public var displayObject :DisplayObject3D; // Create DisplayObject public var shadowCaster :ShadowCaster; // Create ShadowCaster private var light :PointLight3D; // Create PointLight private var shadowPlane :Plane; // Create Plane private var layer :LayerConstant; // Create constant resource layer private static var instance :CoreEnging; // Create CoreEnging class static instance // CoreEnging class static instance mathod function public static function getinstance() { if (instance != null) return instance; else { instance = new CoreEnging(); return instance; } } // CoreEnging constrictor public function CoreEnging () { trace("INFO: Design Lab Application : Core Enging v0.1"); layer = new LayerConstant(); viewport = new BasicView(900, 600, true, true, CameraType.FREE); // pass the width, height, scaleToStage, interactive, cameraType to BasicView viewport.camera.zoom = 100; // Define the zoom level of camera addChild(viewport); createFloor(); // Create the floor displayObject = new DisplayObject3D(); // Create new instance of DisplayObject viewport.scene.addChild(displayObject); // Add the DisplayObject to the BasicView light = new PointLight3D(); // Create new instance of PointLight light.z = -50; // Position the Z of create instance light.x = 0; //Position the X of create instance light.rotationZ = 45; //Position the rotation angel of the Z of create instance light.y = 500; //Position the Y of create instance shadowCaster = new ShadowCaster("shadow", 0x000000, BlendMode.MULTIPLY, .1, [new BlurFilter(20, 20, 1)]); // pass shadowcaster name, color, blend mode, alpha and filters shadowCaster.setType(ShadowCaster.SPOTLIGHT); // Define the shadow type addEventListener(Event.ENTER_FRAME, onRenderEvent); // Add frame render event } // Start create floor public function createFloor() { var spr:Sprite = new Sprite(); // Create Sprite spr.graphics.beginFill(0xFFFFFF); // Define the fill color for sprite spr.graphics.drawRect(0, 0, 600, 600); // Define the X, Y, width, height of the sprite var sprMaterial:MovieMaterial = new MovieMaterial(spr, true, true, true); //Create a texture from an existing sprite instance shadowPlane = new Plane(sprMaterial, 2000, 2000, 1, 1); // create new instance of the Plane and pass the texture 
material, width, height, segmentsW and segmentsH shadowPlane.rotationX = 80; //Position the rotation angel of the X of Plane shadowPlane.y = -200; //Position the Y of Plane viewport.scene.addChild(shadowPlane); // Add the Plane to the BasicView } // switch method function of the page layer control public function addLayer(type:String) { switch (type) { case layer.INTRO: var intro:IntroLayer = new IntroLayer(); break; } } // Create get mathod function for DisplayObject public function getDisplayObject():DisplayObject3D { return displayObject; } // Create get mathod function for BasicView public function getViewport():BasicView { return viewport; } // Rendering function protected function onRenderEvent(event:Event):void { var rotY: Number = (mouseY-(stage.stageHeight/2))/(900/2)*(1200); var rotX: Number = (mouseX-(stage.stageWidth/2))/(600/2)*(-1200); displayObject.rotationY = viewport.camera.x + (rotX - viewport.camera.x) / 50; displayObject.rotationX = viewport.camera.y + (rotY - viewport.camera.y) / 30; // Remove the shadow shadowCaster.invalidate(); // create new shadow on DisplayObject move shadowCaster.castModel(displayObject, light, shadowPlane); viewport.singleRender(); } } } package designLab.layer { import flash.display.Sprite; import flash.events.Event; // Import designLab import designLab.materials.iBusinessCard; import designLab.events.CoreEnging; // Import Papervision3D import org.papervision3d.objects.primitives.Cube; import org.papervision3d.materials.ColorMaterial; import org.papervision3d.materials.MovieMaterial; public class IntroLayer { // IntroLayer constrictor public function IntroLayer() { trace("INFO: Load Intro layer"); var indexDP:DP_index = new DP_index(); //Create the library MovieClip var blackMaterial:MovieMaterial = new MovieMaterial(indexDP, true); //Create a texture from an existing library MovieClip instance blackMaterial.smooth = true; blackMaterial.doubleSided = false; var mycolor:ColorMaterial = new ColorMaterial(0x000000); //Create solid color material var mycard:iBusinessCard = new iBusinessCard(blackMaterial, blackMaterial, mycolor, 372, 10, 207); // Create custom 3D cube object to pass the Front, Back, All, CubeWidth, CubeDepth and CubeHeight CoreEnging.getinstance().getDisplayObject().addChild(mycard.create3DCube()); // Add the custom 3D cube to the DisplayObject } } } package designLab.materials { import flash.display.*; import flash.events.*; // Import Papervision3D import org.papervision3d.materials.*; import org.papervision3d.materials.utils.MaterialsList; import org.papervision3d.objects.primitives.Cube; public class iBusinessCard extends Sprite { private var materialsList :MaterialsList; private var cube :Cube; private var Front :MovieMaterial = new MovieMaterial(); private var Back :MovieMaterial = new MovieMaterial(); private var All :ColorMaterial = new ColorMaterial(); private var CubeWidth :Number; private var CubeDepth :Number; private var CubeHeight :Number; public function iBusinessCard(Front:MovieMaterial, Back:MovieMaterial, All:ColorMaterial, CubeWidth:Number, CubeDepth:Number, CubeHeight:Number) { setFront(Front); setBack(Back); setAll(All); setCubeWidth(CubeWidth); setCubeDepth(CubeDepth); setCubeHeight(CubeHeight); } public function create3DCube():Cube { materialsList = new MaterialsList(); materialsList.addMaterial(Front, "front"); materialsList.addMaterial(Back, "back"); materialsList.addMaterial(All, "left"); materialsList.addMaterial(All, "right"); materialsList.addMaterial(All, "top"); materialsList.addMaterial(All, 
"bottom"); cube = new Cube(materialsList, CubeWidth, CubeDepth, CubeHeight); cube.x = 0; cube.y = 0; cube.z = 0; cube.rotationY = 180; return cube; } public function setFront(Front:MovieMaterial) { this.Front = Front; } public function getFront():MovieMaterial { return Front; } public function setBack(Back:MovieMaterial) { this.Back = Back; } public function getBack():MovieMaterial { return Back; } public function setAll(All:ColorMaterial) { this.All = All; } public function getAll():ColorMaterial { return All; } public function setCubeWidth(CubeWidth:Number) { this.CubeWidth = CubeWidth; } public function getCubeWidth():Number { return CubeWidth; } public function setCubeDepth(CubeDepth:Number) { this.CubeDepth = CubeDepth; } public function getCubeDepth():Number { return CubeDepth; } public function setCubeHeight(CubeHeight:Number) { this.CubeHeight = CubeHeight; } public function getCubeHeight():Number { return CubeHeight; } } } package designLab.shadow { import flash.display.Sprite; import flash.filters.BlurFilter; import flash.geom.Point; import flash.geom.Rectangle; import flash.utils.Dictionary; import org.papervision3d.core.geom.TriangleMesh3D; import org.papervision3d.core.geom.renderables.Triangle3D; import org.papervision3d.core.geom.renderables.Vertex3D; import org.papervision3d.core.math.BoundingSphere; import org.papervision3d.core.math.Matrix3D; import org.papervision3d.core.math.Number3D; import org.papervision3d.core.math.Plane3D; import org.papervision3d.lights.PointLight3D; import org.papervision3d.materials.MovieMaterial; import org.papervision3d.objects.DisplayObject3D; import org.papervision3d.objects.primitives.Plane; public class ShadowCaster { private var vertexRefs:Dictionary; private var numberRefs:Dictionary; private var lightRay:Number3D = new Number3D() private var p3d:Plane3D = new Plane3D(); public var color:uint = 0; public var alpha:Number = 0; public var blend:String = ""; public var filters:Array; public var uid:String; private var _type:String = "point"; private var dir:Number3D; private var planeBounds:Dictionary; private var targetBounds:Dictionary; private var models:Dictionary; public static var DIRECTIONAL:String = "dir"; public static var SPOTLIGHT:String = "spot"; public function ShadowCaster(uid:String, color:uint = 0, blend:String = "multiply", alpha:Number = 1, filters:Array=null) { this.uid = uid; this.color = color; this.alpha = alpha; this.blend = blend; this.filters = filters ? 
filters : [new BlurFilter()]; numberRefs = new Dictionary(true); targetBounds = new Dictionary(true); planeBounds = new Dictionary(true); models = new Dictionary(true); } public function castModel(model:DisplayObject3D, light:PointLight3D, plane:Plane, faces:Boolean = true, cull:Boolean = false):void{ var ar:Array; if(models[model]) { ar = models[model]; }else{ ar = new Array(); getChildMesh(model, ar); models[model] = ar; } var reset:Boolean = true; for each(var t:TriangleMesh3D in ar){ if(faces) castFaces(light, t, plane, cull, reset); else castBoundingSphere(light, t, plane, 0.75, reset); reset = false; } } private function getChildMesh(do3d:DisplayObject3D, ar):void{ if(do3d is TriangleMesh3D) ar.push(do3d); for each(var d:DisplayObject3D in do3d.children) getChildMesh(d, ar); } public function setType(type:String="point"):void{ _type = type; } public function getType():String{ return _type; } public function castBoundingSphere(light:PointLight3D, target:TriangleMesh3D, plane:Plane, scaleRadius:Number=0.8, clear:Boolean = true):void{ var planeVertices:Array = plane.geometry.vertices; //convert to target space? var world:Matrix3D = plane.world; var inv:Matrix3D = Matrix3D.inverse(plane.transform); var lp:Number3D = new Number3D(light.x, light.y, light.z); Matrix3D.multiplyVector(inv, lp); p3d.setNormalAndPoint(plane.geometry.faces[0].faceNormal, new Number3D()); var b:BoundingSphere = target.geometry.boundingSphere; var bounds:Object = planeBounds[plane]; if(!bounds){ bounds = plane.boundingBox(); planeBounds[plane] = bounds; } var tbounds:Object = targetBounds[target]; if(!tbounds){ tbounds = target.boundingBox(); targetBounds[target] = tbounds; } var planeMovie:Sprite = Sprite(MovieMaterial(plane.material).movie); var movieSize:Point = new Point(planeMovie.width, planeMovie.height); var castClip:Sprite = getCastClip(plane); castClip.blendMode = this.blend; castClip.filters = this.filters; castClip.alpha = this.alpha; if(clear) castClip.graphics.clear(); vertexRefs = new Dictionary(true); var tlp:Number3D = new Number3D(light.x, light.y, light.z); Matrix3D.multiplyVector(Matrix3D.inverse(target.world), tlp); var center:Number3D = new Number3D(tbounds.min.x+tbounds.size.x*0.5, tbounds.min.y+tbounds.size.y*0.5, tbounds.min.z+tbounds.size.z*0.5); var dif:Number3D = Number3D.sub(lp, center); dif.normalize(); var other:Number3D = new Number3D(); other.x = -dif.y; other.y = dif.x; other.z = 0; other.normalize(); var cross:Number3D = Number3D.cross(new Number3D(plane.transform.n12, plane.transform.n22, plane.transform.n32), p3d.normal); cross.normalize(); //cross = new Number3D(-dif.y, dif.x, 0); //cross.normalize(); cross.multiplyEq(b.radius*scaleRadius); if(_type == DIRECTIONAL){ var oPos:Number3D = new Number3D(target.x, target.y, target.z); Matrix3D.multiplyVector(target.world, oPos); Matrix3D.multiplyVector(inv, oPos); dir = new Number3D(oPos.x-lp.x, oPos.y-lp.y, oPos.z-lp.z); } //numberRefs = new Dictionary(true); var pos:Number3D; var c2d:Point; var r2d:Point; //_type = SPOTLIGHT; pos = projectVertex(new Vertex3D(center.x, center.y, center.z), lp, inv, target.world); c2d = get2dPoint(pos, bounds.min, bounds.size, movieSize); pos = projectVertex(new Vertex3D(center.x+cross.x, center.y+cross.y, center.z+cross.z), lp, inv, target.world); r2d = get2dPoint(pos, bounds.min, bounds.size, movieSize); var dx:Number = r2d.x-c2d.x; var dy:Number = r2d.y-c2d.y; var rad:Number = Math.sqrt(dx*dx+dy*dy); castClip.graphics.beginFill(color); castClip.graphics.moveTo(c2d.x, c2d.y); 
castClip.graphics.drawCircle(c2d.x, c2d.y, rad); castClip.graphics.endFill(); } public function getCastClip(plane:Plane):Sprite{ var planeMovie:Sprite = Sprite(MovieMaterial(plane.material).movie); var movieSize:Point = new Point(planeMovie.width, planeMovie.height); var castClip:Sprite;// = new Sprite(); if(planeMovie.getChildByName("castClip"+uid)) return Sprite(planeMovie.getChildByName("castClip"+uid)); else{ castClip = new Sprite(); castClip.name = "castClip"+uid; castClip.scrollRect = new Rectangle(0, 0, movieSize.x, movieSize.y); //castClip.alpha = 0.4; planeMovie.addChild(castClip); return castClip; } } public function castFaces(light:PointLight3D, target:TriangleMesh3D, plane:Plane, cull:Boolean=false, clear:Boolean = true):void{ var planeVertices:Array = plane.geometry.vertices; //convert to target space? var world:Matrix3D = plane.world; var inv:Matrix3D = Matrix3D.inverse(plane.transform); var lp:Number3D = new Number3D(light.x, light.y, light.z); Matrix3D.multiplyVector(inv, lp); var tlp:Number3D; if(cull){ tlp = new Number3D(light.x, light.y, light.z); Matrix3D.multiplyVector(Matrix3D.inverse(target.world), tlp); } //Matrix3D.multiplyVector(Matrix3D.inverse(target.transform), tlp); //p3d.setThreePoints(planeVertices[0].getPosition(), planeVertices[1].getPosition(), planeVertices[2].getPosition()); p3d.setNormalAndPoint(plane.geometry.faces[0].faceNormal, new Number3D()); if(_type == DIRECTIONAL){ var oPos:Number3D = new Number3D(target.x, target.y, target.z); Matrix3D.multiplyVector(target.world, oPos); Matrix3D.multiplyVector(inv, oPos); dir = new Number3D(oPos.x-lp.x, oPos.y-lp.y, oPos.z-lp.z); } var bounds:Object = planeBounds[plane]; if(!bounds){ bounds = plane.boundingBox(); planeBounds[plane] = bounds; } var castClip:Sprite = getCastClip(plane); castClip.blendMode = this.blend; castClip.filters = this.filters; castClip.alpha = this.alpha; var planeMovie:Sprite = Sprite(MovieMaterial(plane.material).movie); var movieSize:Point = new Point(planeMovie.width, planeMovie.height); if(clear) castClip.graphics.clear(); vertexRefs = new Dictionary(true); //numberRefs = new Dictionary(true); var pos:Number3D; var p2d:Point; var s2d:Point; var hitVert:Number3D = new Number3D(); for each(var t:Triangle3D in target.geometry.faces){ if( cull){ hitVert.x = t.v0.x; hitVert.y = t.v0.y; hitVert.z = t.v0.z; if(Number3D.dot(t.faceNormal, Number3D.sub(tlp, hitVert)) <= 0) continue; } castClip.graphics.beginFill(color); pos = projectVertex(t.v0, lp, inv, target.world); s2d = get2dPoint(pos, bounds.min, bounds.size, movieSize); castClip.graphics.moveTo(s2d.x, s2d.y); pos = projectVertex(t.v1, lp, inv, target.world); p2d = get2dPoint(pos, bounds.min, bounds.size, movieSize); castClip.graphics.lineTo(p2d.x, p2d.y); pos = projectVertex(t.v2, lp, inv, target.world); p2d = get2dPoint(pos, bounds.min, bounds.size, movieSize); castClip.graphics.lineTo(p2d.x, p2d.y); castClip.graphics.lineTo(s2d.x, s2d.y); castClip.graphics.endFill(); } } public function invalidate():void{ invalidateModels(); invalidatePlanes(); } public function invalidatePlanes():void{ planeBounds = new Dictionary(true); } public function invalidateTargets():void{ numberRefs = new Dictionary(true); targetBounds = new Dictionary(true); } public function invalidateModels():void{ models = new Dictionary(true); invalidateTargets(); } private function get2dPoint(pos3D:Number3D, min3D:Number3D, size3D:Number3D, movieSize:Point):Point{ return new Point((pos3D.x-min3D.x)/size3D.x*movieSize.x, ((-pos3D.y-min3D.y)/size3D.y*movieSize.y)); } 
private function projectVertex(v:Vertex3D, light:Number3D, invMat:Matrix3D, world:Matrix3D):Number3D{ var pos:Number3D = vertexRefs[v]; if(pos) return pos; var n:Number3D = numberRefs[v]; if(!n){ n = new Number3D(v.x, v.y, v.z); Matrix3D.multiplyVector(world, n); Matrix3D.multiplyVector(invMat, n); numberRefs[v] = n; } if(_type == SPOTLIGHT){ lightRay.x = light.x; lightRay.y = light.y; lightRay.z = light.z; }else{ lightRay.x = n.x-dir.x; lightRay.y = n.y-dir.y; lightRay.z = n.z-dir.z; } pos = p3d.getIntersectionLineNumbers(lightRay, n); vertexRefs[v] = pos; return pos; } } } package designLab.utils { public class LayerConstant { public const INTRO:String = "INTRO"; // Intro layer string constant } }*emphasized text*

    Read the article

  • Postgres cannot connect to server

    - by user1408935
    Super stumped by why Postgres isn't working on a new app I just started. I've got it working for one app already. I'm using postgres.app, and it's running. I started a new app with rails new depot -d postgresql and then I went into the database.yml file and changed username to my $USER (which is what it is for the other app, which is working). So now my database.yml file has this development section: development: adapter: postgresql encoding: unicode database: depot_development pool: 5 username: <username> password: But when I run "rake db:create" or "rake db:create:all" I still got this error (in full, cause I don't know what's relevant): Couldn't create database for {"adapter"=>"postgresql", "encoding"=>"unicode", "database"=>"depot_development", "pool"=>5, "username"=>"<username>", "password"=>nil} could not connect to server: Permission denied Is the server running locally and accepting connections on Unix domain socket "/var/pgsql_socket/.s.PGSQL.5432"? /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/postgresql_adapter.rb:1213:in `initialize' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/postgresql_adapter.rb:1213:in `new' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/postgresql_adapter.rb:1213:in `connect' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/postgresql_adapter.rb:329:in `initialize' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/postgresql_adapter.rb:28:in `new' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/postgresql_adapter.rb:28:in `postgresql_connection' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:309:in `new_connection' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:319:in `checkout_new_connection' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:241:in `block (2 levels) in checkout' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:236:in `loop' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:236:in `block in checkout' /Users/<username>/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/monitor.rb:211:in `mon_synchronize' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:233:in `checkout' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:96:in `block in connection' /Users/<username>/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/monitor.rb:211:in `mon_synchronize' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:95:in `connection' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:404:in `retrieve_connection' 
/Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_specification.rb:170:in `retrieve_connection' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_specification.rb:144:in `connection' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/railties/databases.rake:107:in `rescue in create_database' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/railties/databases.rake:51:in `create_database' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/railties/databases.rake:40:in `block (3 levels) in <top (required)>' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/railties/databases.rake:40:in `each' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/railties/databases.rake:40:in `block (2 levels) in <top (required)>' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/task.rb:205:in `call' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/task.rb:205:in `block in execute' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/task.rb:200:in `each' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/task.rb:200:in `execute' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/task.rb:158:in `block in invoke_with_call_chain' /Users/<username>/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/monitor.rb:211:in `mon_synchronize' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/task.rb:151:in `invoke_with_call_chain' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/task.rb:144:in `invoke' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/application.rb:116:in `invoke_task' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/application.rb:94:in `block (2 levels) in top_level' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/application.rb:94:in `each' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/application.rb:94:in `block in top_level' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/application.rb:133:in `standard_exception_handling' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/application.rb:88:in `top_level' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/application.rb:66:in `block in run' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/application.rb:133:in `standard_exception_handling' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/application.rb:63:in `run' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/bin/rake:33:in `<top (required)>' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/bin/rake:19:in `load' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/bin/rake:19:in `<main>' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/bin/ruby_noexec_wrapper:14:in `eval' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/bin/ruby_noexec_wrapper:14:in `<main>' Couldn't create database for {"adapter"=>"postgresql", "encoding"=>"unicode", "database"=>"depot_test", "pool"=>5, "username"=>"<username>", "password"=>nil} I have tried 
createdb depot_development I have tried going into the psql environment and listing users (which included my username among them). In the same psql environment, I tried CREATE DATABASE depot; I've made sure that the pg gem is installed with bundle install, I've run "pg_ctl start", to which I got this response: pg_ctl: no database directory specified and environment variable PGDATA unset I ran "ps aux | grep postgres" to make sure postgres was running, to which I got this in return (which looks like it's doing OK, right?): <username> 10390 0.4 0.0 2425480 180 s000 R+ 6:15PM 0:00.00 grep postgres <username> 2907 0.0 0.0 2441604 464 ?? Ss 6:17PM 0:02.31 postgres: stats collector process <username> 2906 0.0 0.0 2445520 1664 ?? Ss 6:17PM 0:02.33 postgres: autovacuum launcher process <username> 2905 0.0 0.0 2445388 600 ?? Ss 6:17PM 0:09.25 postgres: wal writer process <username> 2904 0.0 0.0 2445388 1252 ?? Ss 6:17PM 0:12.08 postgres: writer process <username> 2902 0.0 0.0 2445388 3688 ?? S 6:17PM 0:00.54 /Applications/Postgres.app/Contents/MacOS/bin/postgres -D /Users/<username>/Library/Application Support/Postgres/var -p5432 The short of it, is I've been troubleshooting for a WHILE and have NO idea what's wrong. Any ideas? I'd really appreciate it, cause I'm pretty new to Rails, and this is a pretty disheartening roadblock. Thanks! EDIT -- Per request, posting the successful database.yml . It seems the difference is the inclusion of a password: development: adapter: postgresql encoding: unicode database: *******_development pool: 5 username: ******* password: ******* EDIT2 -- When I add a password to the .yml file, then run rake db:create again, I get this error. rake aborted! No Rakefile found (looking for: rakefile, Rakefile, rakefile.rb, Rakefile.rb)
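    Two quick checks worth running here, sketched with paths taken from the error text and from common Postgres.app defaults (treat both as assumptions):

        # Can anything reach the server over TCP on the default port?
        psql -h localhost -p 5432 -l

        # Where does the Unix socket really live? Postgres.app normally does
        # not use /var/pgsql_socket, which is where the pg gem is looking.
        ls -l /tmp/.s.PGSQL.5432 /var/pgsql_socket

    If the TCP connection works, adding host: localhost to the development section of database.yml forces Rails over TCP instead of the missing socket. The trailing "No Rakefile found" error is unrelated; it just means rake was run outside the application directory.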

    Read the article

  • Storing session data in MySQL using PHP does not retrieve the data properly from the tables

    - by Ronedog
    I have a problem retrieving some data from the $_SESSION using php and mysql. I've commented out the line in php.ini that tells the server to use the "file" to store the session info so my database will be used. I have a class that I use to write the information to the database and its working fine. When the user passes their credentials the class gets instantiated and the $_SESSION vars get set, then the user gets redirected to the index page. The index.php page includes the file where the db session class is, which when instantiated calles session_start() and the session variables should be in $_SESSION, but when I do var_dump($_SESSION) there is nothing in the array. However, when I look at the data in mysql, all the session information is in there. Its acting like session_start() has not been called, but by instantiating the class it is. Any idea what could be wrong? Here's the HTML: <?php include_once "classes/phpsessions_db/class.dbsession.php"; //used for sessions var_dump($_SESSION); ?> <html> . . . </html> Here's the dbsession class: <?php error_reporting(E_ALL); class dbSession { function dbSession($gc_maxlifetime = "", $gc_probability = "", $gc_divisor = "") { // if $gc_maxlifetime is specified and is an integer number if ($gc_maxlifetime != "" && is_integer($gc_maxlifetime)) { // set the new value @ini_set('session.gc_maxlifetime', $gc_maxlifetime); } // if $gc_probability is specified and is an integer number if ($gc_probability != "" && is_integer($gc_probability)) { // set the new value @ini_set('session.gc_probability', $gc_probability); } // if $gc_divisor is specified and is an integer number if ($gc_divisor != "" && is_integer($gc_divisor)) { // set the new value @ini_set('session.gc_divisor', $gc_divisor); } // get session lifetime $this->sessionLifetime = ini_get("session.gc_maxlifetime"); //Added by AARON. cancel the session's auto start,important, without this the session var's don't show up on next pg. 
session_write_close(); // register the new handler session_set_save_handler( array(&$this, 'open'), array(&$this, 'close'), array(&$this, 'read'), array(&$this, 'write'), array(&$this, 'destroy'), array(&$this, 'gc') ); register_shutdown_function('session_write_close'); // start the session @session_start(); } function stop() { $new_sess_id = $this->regenerate_id(true); session_unset(); session_destroy(); return $new_sess_id; } function regenerate_id($return_val=false) { // saves the old session's id $oldSessionID = session_id(); // regenerates the id // this function will create a new session, with a new id and containing the data from the old session // but will not delete the old session session_regenerate_id(); // because the session_regenerate_id() function does not delete the old session, // we have to delete it manually //$this->destroy($oldSessionID); //ADDED by aaron // returns the new session id if($return_val) { return session_id(); } } function open($save_path, $session_name) { // global $gf; // $gf->debug_this($gf, "GF: Opening Session"); // change the next values to match the setting of your mySQL database $mySQLHost = "localhost"; $mySQLUsername = "user"; $mySQLPassword = "pass"; $mySQLDatabase = "sessions"; $link = mysql_connect($mySQLHost, $mySQLUsername, $mySQLPassword); if (!$link) { die ("Could not connect to database!"); } $dbc = mysql_select_db($mySQLDatabase, $link); if (!$dbc) { die ("Could not select database!"); } return true; } function close() { mysql_close(); return true; } function read($session_id) { $result = @mysql_query(" SELECT session_data FROM session_data WHERE session_id = '".$session_id."' AND http_user_agent = '".$_SERVER["HTTP_USER_AGENT"]."' AND session_expire > '".time()."' "); // if anything was found if (is_resource($result) && @mysql_num_rows($result) > 0) { // return found data $fields = @mysql_fetch_assoc($result); // don't bother with the unserialization - PHP handles this automatically return unserialize($fields["session_data"]); } // if there was an error return an empty string - this HAS to be an empty string return ""; } function write($session_id, $session_data) { // global $gf; // first checks if there is a session with this id $result = @mysql_query(" SELECT * FROM session_data WHERE session_id = '".$session_id."' "); // if there is if (@mysql_num_rows($result) > 0) { // update the existing session's data // and set new expiry time $result = @mysql_query(" UPDATE session_data SET session_data = '".serialize($session_data)."', session_expire = '".(time() + $this->sessionLifetime)."' WHERE session_id = '".$session_id."' "); // if anything happened if (@mysql_affected_rows()) { // return true return true; } } else // if this session id is not in the database { // $gf->debug_this($gf, "inside dbSession, trying to write to db because session id was NOT in db"); $sql = " INSERT INTO session_data ( session_id, http_user_agent, session_data, session_expire ) VALUES ( '".serialize($session_id)."', '".$_SERVER["HTTP_USER_AGENT"]."', '".$session_data."', '".(time() + $this->sessionLifetime)."' ) "; // insert a new record $result = @mysql_query($sql); // if anything happened if (@mysql_affected_rows()) { // return an empty string return ""; } } // if something went wrong, return false return false; } function destroy($session_id) { // deletes the current session id from the database $result = @mysql_query(" DELETE FROM session_data WHERE session_id = '".$session_id."' "); // if anything happened if (@mysql_affected_rows()) { // return true 
return true; } // if something went wrong, return false return false; } function gc($maxlifetime) { // it deletes expired sessions from database $result = @mysql_query(" DELETE FROM session_data WHERE session_expire < '".(time() - $maxlifetime)."' "); } } //End of Class $session = new dbsession(); ?>
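    For reference, a guess at the table this class expects; the table and column names are read straight from its SQL, the types are assumptions. Note also that write() inserts serialize($session_id) while read() and the update check look up the raw id, so freshly inserted rows can never be matched again, which may be exactly why $_SESSION comes back empty:

        # Hypothetical schema matching the queries in the class above
        mysql -u user -p sessions -e "CREATE TABLE session_data (
          session_id      VARCHAR(128) NOT NULL PRIMARY KEY,
          http_user_agent VARCHAR(255) NOT NULL,
          session_data    TEXT         NOT NULL,
          session_expire  INT UNSIGNED NOT NULL
        )"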

    Read the article

  • Troubleshooting Windows Authentication problems (no challenge) in IIS 7.5?

    - by Aaronaught
    I know that there are thousands of reports of people having trouble getting Integrated Windows Authentication to work with IIS, but they all seem to lead to web pages that don't apply or solutions that I've already tried. I've deployed dozens of sites like this before, so either there's something bizarre going on with the server/configuration, or I've been looking at this too long and not seeing the obvious. Simply put, everything works perfectly on my local machine, but falls apart on the production server, which as far as I can tell has the exact same configuration. On the local machine: The machine is running Windows 7 Ultimate, Service Pack 1, IIS 7.5. The site has been tested successfully, using both IIS and the VS Web Development Server. The IIS site config has all authentication methods disabled except Windows Authentication. The local machine is not on any domain. The Providers set up are Negotiate and NTLM (not Negotiate:Kerberos). Extended Protection is Off. All browsers tested (IE, Firefox, Chrome) show the challenge prompt and allow me to log in to the localhost domain with my (local) Windows account. All browsers tested also work using an opaque local IP address - so the browsers themselves don't seem to care whether the site appears "local" or "remote". I've added a display line to the web page which shows the currently-logged-in user and it shows exactly what I would expect (whichever local user I logged in with). On the remote machine: The server is running Windows Server 2008 R2, IIS 7.5. Loading the web page results in an immediate 401.2 error: You are not authorized to view this page due to invalid authentication headers. No challenge prompt ever appears. The IIS site config has all authentication methods disabled except Windows Authentication. The remote machine is not on any domain. The Providers set up are Negotiate and NTLM (not Negotiate:Kerberos). Extended Protection is Off. On the remote machine (remote desktop session), the same error appears in Internet Explorer regardless of whether the domain is localhost or the external IP address. If I try to view the remote web site from my local machine, the error is still 401, but a slightly different 401. No subcode, with the text: Access is denied due to invalid credentials. The Windows Authentication IIS role feature is installed. The WindowsAuthentication Module is added (at the Server level). The exact same error occurs if I turn off Windows Authentication and enable Basic Authentication. The site does load if I turn off Windows Authentication and enable Anonymous (obviously). I've already followed all of the troubleshooting steps on Microsoft Support: Troubleshooting HTTP 401 errors in IIS I've already tried the workaround shown on another Microsoft support page (supposedly to force NTLM as the only method). Last but not least, I tried turning on FREB for 401.2 errors and the results don't seem to tell me anything useful, all I see is the following warning: MODULE_SET_RESPONSE_ERROR_STATUS ModuleName IIS Web Core Notification 2 HttpStatus 401 HttpReason Unauthorized HttpSubStatus 2 ErrorCode 2147942405 ConfigExceptionInfo Notification AUTHENTICATE_REQUEST ErrorCode Access is denied. (0x80070005) ...this seems to just be telling me what I already know (that it's simply rejecting the request instead of negotiating the credentials). 
The trace does indicate that the WindowsAuthentication module is correctly loaded because there is a NOTIFY_MODULE_START line with ModuleName = WindowsAuthentication (and various other ASP.NET follow-up events - [un]fortunately, no interesting errors or warnings here). Can anyone tell me what I might be missing here? Quick Update: I'm a little uncomfortable sending a whole Wireshark dump as it would reveal IPs, URLs and other stuff, but I did a side-by-side comparison of the HTTP responses from localhost and the remote server in Fiddler, and it seems fairly self-evident what the problem is: Localhost: HTTP/1.1 401 Unauthorized Cache-Control: private Content-Type: text/html; charset=utf-8 Server: Microsoft-IIS/7.5 WWW-Authenticate: Negotiate WWW-Authenticate: NTLM X-Powered-By: ASP.NET Date: Sat, 17 Dec 2011 23:42:34 GMT Content-Length: 6399 Proxy-Support: Session-Based-Authentication Remote: HTTP/1.1 401 Unauthorized Content-Type: text/html Server: Microsoft-IIS/7.5 X-Powered-By: ASP.NET Date: Sat, 17 Dec 2011 23:43:13 GMT Content-Length: 1293 Aside from a few seemingly-inconsequential differences like cache-control, the main difference is that the remote server is not sending the WWW-Authenticate headers back to the client. So, I guess that narrows the question down to: Why is IIS not sending WWW-Authenticate headers when Windows Authentication appears to be installed, loaded, and exclusively enabled?
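    One more sanity check, sketched with appcmd (standard on IIS 7.5; the site name is an assumption): dump the effective configuration and compare it with the working machine, since what IIS Manager displays and what is actually committed to applicationHost.config can differ.

        rem Show the effective windowsAuthentication section for the site
        %windir%\system32\inetsrv\appcmd.exe list config "Default Web Site" /section:windowsAuthentication

        rem Re-enable it explicitly at the apphost level if it turns out empty
        %windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" /section:windowsAuthentication /enabled:true /commit:apphost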

    Read the article

  • DNS server not functioning correctly

    - by Shamit Shrestha
    I have setup a DNS server which isnt working properly. My domain is accswift.com which has glued to two name servers ns1.accswift.com and ns2.accswift.com for the same IP address - 203.78.164.18. On domain end everything should be fine. Please check -http://www.intodns.com/accswift.com I am sure its the problem with the linux server. Can anyone help me find where the problem is for me? Below is the settings that I have in the server. ====================== DIG [root@accswift ~]# dig accswift.com ; << DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6_4.6 << accswift.com ;; global options: +cmd ;; Got answer: ;; -HEADER<<- opcode: QUERY, status: NOERROR, id: 11275 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2 ;; QUESTION SECTION: ;accswift.com. IN A ;; ANSWER SECTION: accswift.com. 38400 IN A 203.78.164.18 ;; AUTHORITY SECTION: accswift.com. 38400 IN NS ns1.accswift.com. accswift.com. 38400 IN NS ns2.accswift.com. ;; ADDITIONAL SECTION: ns1.accswift.com. 38400 IN A 203.78.164.18 ns2.accswift.com. 38400 IN A 203.78.164.18 ;; Query time: 1 msec ;; SERVER: 127.0.0.1#53(127.0.0.1) ;; WHEN: Wed Nov 6 20:12:16 2013 ;; MSG SIZE rcvd: 114 ============== IP Tables settings vi /etc/sysconfig/iptables *filter :FORWARD ACCEPT [0:0] :INPUT ACCEPT [0:0] :OUTPUT ACCEPT [0:0] -A FORWARD -o eth0 -j LOG --log-level 7 --log-prefix BANDWIDTH_OUT: -A FORWARD -i eth0 -j LOG --log-level 7 --log-prefix BANDWIDTH_IN: -A OUTPUT -o eth0 -j LOG --log-level 7 --log-prefix BANDWIDTH_OUT: -A INPUT -i eth0 -j LOG --log-level 7 --log-prefix BANDWIDTH_IN: -A INPUT -p udp -m udp --sport 53 -j ACCEPT -A OUTPUT -p udp -m udp --dport 53 -j ACCEPT COMMIT Completed on Fri Sep 20 04:20:33 2013 Generated by webmin *mangle :FORWARD ACCEPT [0:0] :INPUT ACCEPT [0:0] :OUTPUT ACCEPT [0:0] :PREROUTING ACCEPT [0:0] :POSTROUTING ACCEPT [0:0] COMMIT Completed Generated by webmin *nat :OUTPUT ACCEPT [0:0] :PREROUTING ACCEPT [0:0] :POSTROUTING ACCEPT [0:0] COMMIT ====DNS settings vi /var/named/accswift.com.host $ttl 38400 @ IN SOA ns1.accswift.com. root.ns1.accswift.com. ( 1382936091 10800 3600 604800 38400 ) @ IN NS ns1.accswift.com. @ IN NS ns2.accswift.com. accswift.com. IN A 203.78.164.18 accswift.com. IN NS ns1.accswift.com. www.accswift.com. IN A 203.78.164.18 ftp.accswift.com. IN A 203.78.164.18 m.accswift.com. IN A 203.78.164.18 ns1 IN A 203.78.164.18 ns2 IN A 203.78.164.18 localhost.accswift.com. IN A 127.0.0.1 webmail.accswift.com. IN A 203.78.164.18 admin.accswift.com. IN A 203.78.164.18 mail.accswift.com. IN A 203.78.164.18 accswift.com. IN MX 5 mail.accswift.com. ====Named.conf vi /etc/named.conf options { listen-on port 53 { 127.0.0.1; }; listen-on-v6 port 53 { ::1; }; directory "/var/named"; dump-file "/var/named/data/cache_dump.db"; statistics-file "/var/named/data/named_stats.txt"; memstatistics-file "/var/named/data/named_mem_stats.txt"; allow-query { any; }; recursion yes; allow-recursion { localhost; 192.168.2.0/24; }; dnssec-enable yes; dnssec-validation yes; dnssec-lookaside auto; /* Path to ISC DLV key */ bindkeys-file "/etc/named.iscdlv.key"; managed-keys-directory "/var/named/dynamic"; forward first; forwarders {192.168.1.1;}; }; logging { channel default_debug { file "data/named.run"; severity dynamic; }; }; zone "." 
IN { type hint; file "named.ca"; }; include "/etc/named.rfc1912.zones"; include "/etc/named.root.key"; zone "accswift.com" { type master; file "/var/named/accswift.com.hosts"; allow-transfer { 127.0.0.1; localnets; 208.73.211.69; }; }; zone "ns1.accswift.com" { type master; file "/var/named/ns1.accswift.com.hosts"; }; ==================================== Can anybody find any flaw in this? I am still unable to reach accswift.com from any other ISP. But it is browsable from the same network though. Thanks in advance.
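    One thing stands out in named.conf above: listen-on port 53 { 127.0.0.1; }; binds named to loopback only, which would produce exactly this symptom (resolvable from the server itself, dead from every other ISP). A quick sketch to confirm, with the IP taken from the question:

        # Is named bound to the public address, or only to 127.0.0.1?
        netstat -lnp | grep ':53'

        # Query the public address directly, from the box and from outside
        dig @203.78.164.18 accswift.com

        # If only loopback answers, widen the binding in named.conf, e.g.:
        #   listen-on port 53 { 127.0.0.1; 203.78.164.18; };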

    Read the article

  • Video and file caching with Squid Lusca?

    - by moon
    hello all i have configured squid lusca on ubuntu 11.04 version and also configured the video caching but the problem is the squid cannot configure the video more than 2 min long and the file of size upto 5.xx mbs only. here is my config please guide me how can i cache the long videos and files with squid: > # PORT and Transparent Option http_port 8080 transparent server_http11 on icp_port 0 > > # Cache Directory , modify it according to your system. > # but first create directory in root by mkdir /cache1 > # and then issue this command chown proxy:proxy /cache1 > # [for ubuntu user is proxy, in Fedora user is SQUID] > # I have set 500 MB for caching reserved just for caching , > # adjust it according to your need. > # My recommendation is to have one cache_dir per drive. zzz > > #store_dir_select_algorithm round-robin cache_dir aufs /cache1 500 16 256 cache_replacement_policy heap LFUDA memory_replacement_policy heap > LFUDA > > # If you want to enable DATE time n SQUID Logs,use following emulate_httpd_log on logformat squid %tl %6tr %>a %Ss/%03Hs %<st %rm > %ru %un %Sh/%<A %mt log_fqdn off > > # How much days to keep users access web logs > # You need to rotate your log files with a cron job. For example: > # 0 0 * * * /usr/local/squid/bin/squid -k rotate logfile_rotate 14 debug_options ALL,1 cache_access_log /var/log/squid/access.log > cache_log /var/log/squid/cache.log cache_store_log > /var/log/squid/store.log > > #I used DNSAMSQ service for fast dns resolving > #so install by using "apt-get install dnsmasq" first dns_nameservers 127.0.0.1 101.11.11.5 ftp_user anonymous@ ftp_list_width 32 ftp_passive on ftp_sanitycheck on > > #ACL Section acl all src 0.0.0.0/0.0.0.0 acl manager proto cache_object acl localhost src 127.0.0.1/255.255.255.255 acl > to_localhost dst 127.0.0.0/8 acl SSL_ports port 443 563 # https, snews > acl SSL_ports port 873 # rsync acl Safe_ports port 80 # http acl > Safe_ports port 21 # ftp acl Safe_ports port 443 563 # https, snews > acl Safe_ports port 70 # gopher acl Safe_ports port 210 # wais acl > Safe_ports port 1025-65535 # unregistered ports acl Safe_ports port > 280 # http-mgmt acl Safe_ports port 488 # gss-http acl Safe_ports port > 591 # filemaker acl Safe_ports port 777 # multiling http acl > Safe_ports port 631 # cups acl Safe_ports port 873 # rsync acl > Safe_ports port 901 # SWAT acl purge method PURGE acl CONNECT method > CONNECT http_access allow manager localhost http_access deny manager > http_access allow purge localhost http_access deny purge http_access > deny !Safe_ports http_access deny CONNECT !SSL_ports http_access allow > localhost http_access allow all http_reply_access allow all icp_access > allow all > > #========================== > # Administrative Parameters > #========================== > > # I used UBUNTU so user is proxy, in FEDORA you may use use squid cache_effective_user proxy cache_effective_group proxy cache_mgr > [email protected] visible_hostname proxy.aacable.net unique_hostname > [email protected] > > #============= > # ACCELERATOR > #============= half_closed_clients off quick_abort_min 0 KB quick_abort_max 0 KB vary_ignore_expire on reload_into_ims on log_fqdn > off memory_pools off > > # If you want to hide your proxy machine from being detected at various site use following via off > > #============================================ > # OPTIONS WHICH AFFECT THE CACHE SIZE / zaib > #============================================ > # If you have 4GB memory in Squid box, we will use formula of 1/3 > # You can adjust it 
according to your need. IF squid is taking too much of RAM > # Then decrease it to 128 MB or even less. > > cache_mem 256 MB minimum_object_size 512 bytes maximum_object_size 500 > MB maximum_object_size_in_memory 128 KB > > #============================================================$ > # SNMP , if you want to generate graphs for SQUID via MRTG > #============================================================$ > #acl snmppublic snmp_community gl > #snmp_port 3401 > #snmp_access allow snmppublic all > #snmp_access allow all > > #============================================================ > # ZPH , To enable cache content to be delivered at full lan speed, > # To bypass the queue at MT. > #============================================================ tcp_outgoing_tos 0x30 all zph_mode tos zph_local 0x30 zph_parent 0 > zph_option 136 > > # Caching Youtube acl videocache_allow_url url_regex -i \.youtube\.com\/get_video\? acl videocache_allow_url url_regex -i > \.youtube\.com\/videoplayback \.youtube\.com\/videoplay > \.youtube\.com\/get_video\? acl videocache_allow_url url_regex -i > \.youtube\.[a-z][a-z]\/videoplayback \.youtube\.[a-z][a-z]\/videoplay > \.youtube\.[a-z][a-z]\/get_video\? acl videocache_allow_url url_regex > -i \.googlevideo\.com\/videoplayback \.googlevideo\.com\/videoplay \.googlevideo\.com\/get_video\? acl videocache_allow_url url_regex -i > \.google\.com\/videoplayback \.google\.com\/videoplay > \.google\.com\/get_video\? acl videocache_allow_url url_regex -i > \.google\.[a-z][a-z]\/videoplayback \.google\.[a-z][a-z]\/videoplay > \.google\.[a-z][a-z]\/get_video\? acl videocache_allow_url url_regex > -i proxy[a-z0-9\-][a-z0-9][a-z0-9][a-z0-9]?\.dailymotion\.com\/ acl videocache_allow_url url_regex -i vid\.akm\.dailymotion\.com\/ acl > videocache_allow_url url_regex -i > [a-z0-9][0-9a-z][0-9a-z]?[0-9a-z]?[0-9a-z]?\.xtube\.com\/(.*)flv acl > videocache_allow_url url_regex -i \.vimeo\.com\/(.*)\.(flv|mp4) acl > videocache_allow_url url_regex -i > va\.wrzuta\.pl\/wa[0-9][0-9][0-9][0-9]? 
        acl videocache_allow_url url_regex -i \.youporn\.com\/(.*)\.flv
        acl videocache_allow_url url_regex -i \.msn\.com\.edgesuite\.net\/(.*)\.flv
        acl videocache_allow_url url_regex -i \.tube8\.com\/(.*)\.(flv|3gp)
        acl videocache_allow_url url_regex -i \.mais\.uol\.com\.br\/(.*)\.flv
        acl videocache_allow_url url_regex -i \.blip\.tv\/(.*)\.(flv|avi|mov|mp3|m4v|mp4|wmv|rm|ram|m4v)
        acl videocache_allow_url url_regex -i \.apniisp\.com\/(.*)\.(flv|avi|mov|mp3|m4v|mp4|wmv|rm|ram|m4v)
        acl videocache_allow_url url_regex -i \.break\.com\/(.*)\.(flv|mp4)
        acl videocache_allow_url url_regex -i redtube\.com\/(.*)\.flv
        acl videocache_allow_dom dstdomain .mccont.com .metacafe.com .cdn.dailymotion.com
        acl videocache_deny_dom dstdomain .download.youporn.com .static.blip.tv
        acl dontrewrite url_regex redbot\.org \.php
        acl getmethod method GET

        storeurl_access deny dontrewrite
        storeurl_access deny !getmethod
        storeurl_access deny videocache_deny_dom
        storeurl_access allow videocache_allow_url
        storeurl_access allow videocache_allow_dom
        storeurl_access deny all

        storeurl_rewrite_program /etc/squid/storeurl.pl
        storeurl_rewrite_children 7
        storeurl_rewrite_concurrency 10

        acl store_rewrite_list urlpath_regex -i \/(get_video\?|videodownload\?|videoplayback.*id)
        acl store_rewrite_list urlpath_regex -i \.flv$ \.mp3$ \.mp4$ \.swf$ \
        storeurl_access allow store_rewrite_list
        storeurl_access deny all

        refresh_pattern -i \.flv$ 10080 80% 10080 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-private ignore-auth
        refresh_pattern -i \.mp3$ 10080 80% 10080 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-private ignore-auth
        refresh_pattern -i \.mp4$ 10080 80% 10080 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-private ignore-auth
        refresh_pattern -i \.swf$ 10080 80% 10080 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-private ignore-auth
        refresh_pattern -i \.gif$ 10080 80% 10080 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-private ignore-auth
        refresh_pattern -i \.jpg$ 10080 80% 10080 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-private ignore-auth
        refresh_pattern -i \.jpeg$ 10080 80% 10080 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-private ignore-auth
        refresh_pattern -i \.exe$ 10080 80% 10080 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-private ignore-auth

        # 1 year = 525600 mins, 1 month = 10080 mins, 1 day = 1440
        refresh_pattern (get_video\?|videoplayback\?|videodownload\?|\.flv?) 10080 80% 10080 ignore-no-cache ignore-private override-expire override-lastmod reload-into-ims
        refresh_pattern (get_video\?|videoplayback\?id|videoplayback.*id|videodownload\?|\.flv?) 10080 80% 10080 ignore-no-cache ignore-private override-expire override-lastmod reload-into-ims
        refresh_pattern \.(ico|video-stats) 10080 80% 10080 override-expire ignore-reload ignore-no-cache ignore-private ignore-auth override-lastmod negative-ttl=10080
        refresh_pattern \.etology\? 10080 80% 10080 override-expire ignore-reload ignore-no-cache
        refresh_pattern galleries\.video(\?|sz) 10080 80% 10080 override-expire ignore-reload ignore-no-cache
        refresh_pattern brazzers\? 10080 80% 10080 override-expire ignore-reload ignore-no-cache
        refresh_pattern \.adtology\? 10080 80% 10080 override-expire ignore-reload ignore-no-cache
        refresh_pattern ^.*(utm\.gif|ads\?|rmxads\.com|ad\.z5x\.net|bh\.contextweb\.com|bstats\.adbrite\.com|a1\.interclick\.com|ad\.trafficmp\.com|ads\.cubics\.com|ad\.xtendmedia\.com|\.googlesyndication\.com|advertising\.com|yieldmanager|game-advertising\.com|pixel\.quantserve\.com|adperium\.com|doubleclick\.net|adserving\.cpxinteractive\.com|syndication\.com|media.fastclick.net).* 10080 20% 10080 ignore-no-cache ignore-private override-expire ignore-reload ignore-auth negative-ttl=40320 max-stale=10
        refresh_pattern ^.*safebrowsing.*google 10080 80% 10080 override-expire ignore-reload ignore-no-cache ignore-private ignore-auth negative-ttl=10080
        refresh_pattern ^http://((cbk|mt|khm|mlt)[0-9]?)\.google\.co(m|\.uk) 10080 80% 10080 override-expire ignore-reload ignore-private negative-ttl=10080
        refresh_pattern ytimg\.com.*\.jpg 10080 80% 10080 override-expire ignore-reload
        refresh_pattern images\.friendster\.com.*\.(png|gif) 10080 80% 10080 override-expire ignore-reload
        refresh_pattern garena\.com 10080 80% 10080 override-expire reload-into-ims
        refresh_pattern photobucket.*\.(jp(e?g|e|2)|tiff?|bmp|gif|png) 10080 80% 10080 override-expire ignore-reload
        refresh_pattern vid\.akm\.dailymotion\.com.*\.on2\? 10080 80% 10080 ignore-no-cache override-expire override-lastmod
        refresh_pattern mediafire.com\/images.*\.(jp(e?g|e|2)|tiff?|bmp|gif|png) 10080 80% 10080 reload-into-ims override-expire ignore-private
        refresh_pattern ^http:\/\/images|pics|thumbs[0-9]\. 10080 80% 10080 reload-into-ims ignore-no-cache ignore-reload override-expire
        refresh_pattern ^http:\/\/www.onemanga.com.*\/ 10080 80% 10080 reload-into-ims ignore-no-cache ignore-reload override-expire
        refresh_pattern ^http://v\.okezone\.com/get_video\/([a-zA-Z0-9]) 10080 80% 10080 override-expire ignore-reload ignore-no-cache ignore-private ignore-auth override-lastmod negative-ttl=10080

        # images facebook
        refresh_pattern -i \.facebook.com.*\.(jpg|png|gif) 10080 80% 10080 ignore-reload override-expire ignore-no-cache
        refresh_pattern -i \.fbcdn.net.*\.(jpg|gif|png|swf|mp3) 10080 80% 10080 ignore-reload override-expire ignore-no-cache
        refresh_pattern static\.ak\.fbcdn\.net*\.(jpg|gif|png) 10080 80% 10080 ignore-reload override-expire ignore-no-cache
        refresh_pattern ^http:\/\/profile\.ak\.fbcdn.net*\.(jpg|gif|png) 10080 80% 10080 ignore-reload override-expire ignore-no-cache

        # All files
        refresh_pattern -i \.(3gp|7z|ace|asx|bin|deb|divx|dvr-ms|ram|rpm|exe|inc|cab|qt) 10080 80% 10080 ignore-no-cache override-expire override-lastmod reload-into-ims
        refresh_pattern -i \.(rar|jar|gz|tgz|bz2|iso|m1v|m2(v|p)|mo(d|v)|arj|lha|lzh|zip|tar) 10080 80% 10080 ignore-no-cache override-expire override-lastmod reload-into-ims
        refresh_pattern -i \.(jp(e?g|e|2)|gif|pn[pg]|bm?|tiff?|ico|swf|dat|ad|txt|dll) 10080 80% 10080 ignore-no-cache override-expire override-lastmod reload-into-ims
        refresh_pattern -i \.(avi|ac4|mp(e?g|a|e|1|2|3|4)|mk(a|v)|ms(i|u|p)|og(x|v|a|g)|rm|r(a|p)m|snd|vob) 10080 80% 10080 ignore-no-cache override-expire override-lastmod reload-into-ims
        refresh_pattern -i \.(pp(t?x)|s|t)|pdf|rtf|wax|wm(a|v)|wmx|wpl|cb(r|z|t)|xl(s?x)|do(c?x)|flv|x-flv) 10080 80% 10080 ignore-no-cache override-expire override-lastmod reload-into-ims

        refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
        refresh_pattern ^gopher: 1440 0% 1440
        refresh_pattern ^ftp: 10080 95% 10080 override-lastmod reload-into-ims
        refresh_pattern . 1440 95% 10080 override-lastmod reload-into-ims
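    The helper named by storeurl_rewrite_program above is not shown in the excerpt. For orientation, here is a minimal Python sketch of such a helper (an editorial illustration, not the original /etc/squid/storeurl.pl; it assumes Squid 2.7's store-URL protocol, where with storeurl_rewrite_concurrency set each stdin line is "channel-ID URL [extras]" and the reply is "channel-ID [rewritten-URL]", a blank rewrite meaning "leave unchanged"):

        #!/usr/bin/env python
        # Minimal store-URL helper sketch: collapse video-download URLs that
        # differ only in transient query parameters onto one canonical cache key.
        import re
        import sys

        # Hypothetical rule for YouTube-style URLs; the id parameter becomes the key.
        VIDEO = re.compile(r'^http://[^/]+/(?:get_video|videoplayback)\?.*id=([\w-]+)')

        for line in sys.stdin:
            parts = line.split()
            if not parts:
                continue
            channel = parts[0]
            url = parts[1] if len(parts) > 1 else ''
            match = VIDEO.match(url)
            if match:
                # Squid stores and looks up the object under this URL instead.
                sys.stdout.write('%s http://video-srv.squid.internal/id=%s\n'
                                 % (channel, match.group(1)))
            else:
                # Blank answer: keep the original URL.
                sys.stdout.write('%s\n' % channel)
            sys.stdout.flush()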

    Read the article

  • How to correctly use DERIVE or COUNTER in munin plugins

    - by Johan
    I'm using munin to monitor my server. I've been able to write plugins for it, but only if the graph type is GAUGE. When I try COUNTER or DERIVE, no data is logged or graphed. The plugin I'm currently stuck on is for monitoring bandwidth usage, and is as follows:

        /etc/munin/plugins/bandwidth2

        #!/bin/sh
        if [ "$1" = "config" ]; then
            echo 'graph_title Bandwidth Usage 2'
            echo 'graph_vlabel Bandwidth'
            echo 'graph_scale no'
            echo 'graph_category network'
            echo 'graph_info Bandwidth usage.'
            echo 'used.label Used'
            echo 'used.info Bandwidth used so far this month.'
            echo 'used.type DERIVE'
            echo 'used.min 0'
            echo 'remain.label Remaining'
            echo 'remain.info Bandwidth remaining this month.'
            echo 'remain.type DERIVE'
            echo 'remain.min 0'
            exit 0
        fi
        cat /var/log/zen.log

    The contents of /var/log/zen.log are:

        used.value 61.3251953125
        remain.value 20.0146484375

    And the resulting database is:

        <!-- Round Robin Database Dump --><rrd>
          <version> 0003 </version>
          <step> 300 </step> <!-- Seconds -->
          <lastupdate> 1269936605 </lastupdate> <!-- 2010-03-30 09:10:05 BST -->
          <ds>
            <name> 42 </name>
            <type> DERIVE </type>
            <minimal_heartbeat> 600 </minimal_heartbeat>
            <min> 0.0000000000e+00 </min>
            <max> NaN </max>
            <!-- PDP Status -->
            <last_ds> 61.3251953125 </last_ds>
            <value> NaN </value>
            <unknown_sec> 5 </unknown_sec>
          </ds>
          <!-- Round Robin Archives -->
          <rra>
            <cf> AVERAGE </cf>
            <pdp_per_row> 1 </pdp_per_row> <!-- 300 seconds -->
            <params> <xff> 5.0000000000e-01 </xff> </params>
            <cdp_prep>
              <ds>
                <primary_value> NaN </primary_value>
                <secondary_value> NaN </secondary_value>
                <value> NaN </value>
                <unknown_datapoints> 0 </unknown_datapoints>
              </ds>
            </cdp_prep>
            <database>
              <!-- 2010-03-28 09:15:00 BST / 1269764100 --> <row><v> NaN </v></row>
              <!-- 2010-03-28 09:20:00 BST / 1269764400 --> <row><v> NaN </v></row>
              <!-- 2010-03-28 09:25:00 BST / 1269764700 --> <row><v> NaN </v></row>
              <snip>

    The value for last_ds is correct; it just doesn't seem to make it into the actual database. If I change DERIVE to GAUGE, it works as expected. munin-run bandwidth2 outputs the contents of /var/log/zen.log. I've been all over the (sparse) docs for munin plugins and can't find my mistake. Modifying an existing plugin didn't work for me either.
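    An editorial aside on the excerpt above, offered as a hedged observation rather than a confirmed diagnosis: RRDtool's COUNTER and DERIVE data-source types accept only integer values (GAUGE is the type that takes decimals), and the zen.log values above are fractional. A minimal Python sketch of the same plugin that coerces values to integers before printing (field names and the log format are taken from the question):

        #!/usr/bin/env python
        # Sketch of /etc/munin/plugins/bandwidth2 emitting integer values,
        # since RRD's DERIVE/COUNTER data sources reject fractional updates.
        import sys

        if len(sys.argv) > 1 and sys.argv[1] == 'config':
            for line in (
                'graph_title Bandwidth Usage 2',
                'graph_category network',
                'used.label Used', 'used.type DERIVE', 'used.min 0',
                'remain.label Remaining', 'remain.type DERIVE', 'remain.min 0',
            ):
                print(line)
            sys.exit(0)

        # /var/log/zen.log holds lines like "used.value 61.3251953125";
        # truncate to whole units so the DERIVE data source accepts them.
        with open('/var/log/zen.log') as f:
            for line in f:
                field, _, value = line.partition(' ')
                if value.strip():
                    print('%s %d' % (field, int(float(value))))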

    Read the article

  • Again WPA connection problem even after changing to the latest version .. please help

    - by Renjith G
    I am using hostapd and wireless tools with madwifi for the wireless AP on my board. WEP and WPA-PSK connections and communication between my board (running Linux) and my desktop PC running Windows XP SP2 (with an Olitec USB wireless adapter) are fine. But when I configure the WPA type, the connection seems established but the security dialog box shows the status "TKIP - Key Absent". Has anyone faced this problem? I am attaching the conf files and the connection status; the AP side complains as shown in the debug output below. I am using the built-in RADIUS server configuration with hostapd 0.4.7.

    hostapd.conf:

        interface=ath0
        driver=madwifi
        logger_syslog=0
        logger_syslog_level=0
        logger_stdout=0
        logger_stdout_level=0
        debug=0
        eapol_key_index_workaround=1
        dump_file=/tmp/hostapd.dump.0.0
        ssid=Renjith G wpa
        wpa=1
        wpa_passphrase=mypassphrase
        wpa_key_mgmt=WPA-EAP
        wpa_pairwise=TKIP CCMP
        wpa_group_rekey=600
        macaddr_acl=2       /* commented */
        ieee8021x=1         /* commented */
        eap_authenticator=1
        own_ip_addr=172.16.25.1
        nas_identifier=renjithg.com
        auth_server_addr=172.16.25.1
        auth_server_port=1812
        auth_server_shared_secret=key1
        ca_cert=/flash1/ca.crt
        server_cert=/flash1/server.crt
        eap_user_file=/etc/hostapd.eap_user

    hostapd.eap_user:

        "*@renjithg.com" TLS

    And the commands I am using are:

        wlanconfig ath0 create wlandev wifi0 wlanmode ap
        iwconfig ath0 essid Renjith channel 6
        ifconfig ath0 192.168.25.1 netmask 255.255.255.0 up
        hostapd -ddd /etc/hostapd.conf

    Please correct me if I am wrong. Also, I am getting the following debug messages on my AP when connecting from my Windows machine through WPA:

        ~/wlanexe # ./hostapd -ddd /etc/hostapd.conf
        Configuration file: /etc/hostapd.conf
        Line 18: obsolete eap_authenticator used; this has been renamed to eap_server
        madwifi_set_iface_flags: dev_up=0
        Using interface ath0 with hwaddr 00:0b:6b:33:8c:30 and ssid 'Renjith G wpa'
        madwifi_set_ieee8021x: enabled=1
        madwifi_configure_wpa: group key cipher=1
        madwifi_configure_wpa: pairwise key ciphers=0xa
        madwifi_configure_wpa: key management algorithms=0x1
        madwifi_configure_wpa: rsn capabilities=0x0
        madwifi_configure_wpa: enable WPA= 0x1
        madwifi_set_iface_flags: dev_up=1
        madwifi_set_privacy: enabled=1
        WPA: group state machine entering state GTK_INIT
        GMK - hexdump(len=32): 9c 77 cd 38 5a 60 3b 16 8a 22 90 e8 65 b3 c2 86 40 5c be c3 dd 84 3e df 58 1d 16 61 1d 13 d1 f2
        GTK - hexdump(len=32): 02 78 d7 d3 5d 15 e3 89 9c 62 a8 fe 8a 0f 40 28 ba dc cd bc 07 f4 59 88 1c 08 84 2b 49 3d e2 32
        WPA: group state machine entering state SETKEYSDONE
        madwifi_set_key: alg=TKIP addr=00:00:00:00:00:00 key_idx=1
        Flushing old station entries
        madwifi_sta_deauth: addr=ff:ff:ff:ff:ff:ff reason_code=3
        Deauthenticate all stations
        l2_packet_receive - recvfrom: Network is down
        Wireless event: cmd=0x8c03 len=20
        New STA
        WPA: 00:0a:78:a0:0b:09 WPA_PTK entering state INITIALIZE
        madwifi_del_key: addr=00:0a:78:a0:0b:09 key_idx=0
        WPA: 00:0a:78:a0:0b:09 WPA_PTK_GROUP entering state IDLE
        WPA: 00:0a:78:a0:0b:09 WPA_PTK entering state AUTHENTICATION
        WPA: 00:0a:78:a0:0b:09 WPA_PTK entering state AUTHENTICATION2
        IEEE 802.1X: 4 bytes from 00:0a:78:a0:0b:09
        IEEE 802.1X: version=1 type=1 length=0
        Wireless event: cmd=0x8c04 len=20
        madwifi_del_key: addr=00:0a:78:a0:0b:09 key_idx=0
        ioctl[unknown???]: Invalid argument
        WPA: 00:0a:78:a0:0b:09 WPA_PTK entering state DISCONNECTED
        WPA: 00:0a:78:a0:0b:09 WPA_PTK entering state INITIALIZE
        madwifi_del_key: addr=00:0a:78:a0:0b:09 key_idx=0
        ioctl[unknown???]: Invalid argument
        Wireless event: cmd=0x8c03 len=20
        New STA
        WPA: 00:0a:78:a0:0b:09 WPA_PTK entering state INITIALIZE
        madwifi_del_key: addr=00:0a:78:a0:0b:09 key_idx=0
        WPA: 00:0a:78:a0:0b:09 WPA_PTK_GROUP entering state IDLE
        WPA: 00:0a:78:a0:0b:09 WPA_PTK entering state AUTHENTICATION
        WPA: 00:0a:78:a0:0b:09 WPA_PTK entering state AUTHENTICATION2
        IEEE 802.1X: 4 bytes from 00:0a:78:a0:0b:09
        IEEE 802.1X: version=1 type=1 length=0
        < Register Fail
        < Register Fail
        Wireless event: cmd=0x8c04 len=20
        madwifi_del_key: addr=00:0a:78:a0:0b:09 key_idx=0
        ioctl[unknown???]: Invalid argument
        WPA: 00:0a:78:a0:0b:09 WPA_PTK entering state DISCONNECTED
        WPA: 00:0a:78:a0:0b:09 WPA_PTK entering state INITIALIZE
        madwifi_del_key: addr=00:0a:78:a0:0b:09 key_idx=0
        ioctl[unknown???]: Invalid argument
        Wireless event: cmd=0x8c03 len=20
        New STA
        WPA: 00:0a:78:a0:0b:09 WPA_PTK entering state INITIALIZE
        madwifi_del_key: addr=00:0a:78:a0:0b:09 key_idx=0
        WPA: 00:0a:78:a0:0b:09 WPA_PTK_GROUP entering state IDLE
        WPA: 00:0a:78:a0:0b:09 WPA_PTK entering state AUTHENTICATION
        WPA: 00:0a:78:a0:0b:09 WPA_PTK entering state AUTHENTICATION2
        IEEE 802.1X: 4 bytes from 00:0a:78:a0:0b:09
        IEEE 802.1X: version=1 type=1 length=0

    NOW I am getting the following error messages with the latest tools. (These are the latest error messages; please refer to these only.)

        ~/wlanexe # ./hostapd -ddd /etc/hostapd.conf
        TLS: Trusted root certificate(s) loaded
        madwifi_set_iface_flags: dev_up=0
        madwifi_set_privacy: enabled=0
        BSS count 1, BSSID mask ff:ff:ff:ff:ff:ff (0 bits)
        Flushing old station entries
        madwifi_sta_deauth: addr=ff:ff:ff:ff:ff:ff reason_code=3
        ioctl[IEEE80211_IOCTL_SETMLME]: Invalid argument
        madwifi_sta_deauth: Failed to deauth STA (addr ff:ff:ff:ff:ff:ff reason 3)
        Could not connect to kernel driver.
        Deauthenticate all stations
        madwifi_sta_deauth: addr=ff:ff:ff:ff:ff:ff reason_code=2
        ioctl[IEEE80211_IOCTL_SETMLME]: Invalid argument
        madwifi_sta_deauth: Failed to deauth STA (addr ff:ff:ff:ff:ff:ff reason 2)
        madwifi_set_privacy: enabled=0
        madwifi_del_key: addr=00:00:00:00:00:00 key_idx=0
        madwifi_del_key: addr=00:00:00:00:00:00 key_idx=1
        madwifi_del_key: addr=00:00:00:00:00:00 key_idx=2
        madwifi_del_key: addr=00:00:00:00:00:00 key_idx=3
        Using interface ath0 with hwaddr 00:0b:6b:33:8c:30 and ssid 'RenjithGwpa'
        SSID - hexdump_ascii(len=11): 52 65 6e 6a 69 74 68 47 77 70 61   RenjithGwpa
        PSK (ASCII passphrase) - hexdump_ascii(len=12): 6d 79 70 61 73 73 70 68 72 61 73 65   mypassphrase
        PSK (from passphrase) - hexdump(len=32): a6 55 3e 76 94 8b d9 81 a1 22 5e 24 29 83 33 86 11 a8 7e 93 19 7c a9 ab ab cc 12 58 37 e5 35 b6
        RADIUS local address: 172.16.25.1:1024
        madwifi_set_ieee8021x: enabled=1
        madwifi_configure_wpa: group key cipher=1
        madwifi_configure_wpa: pairwise key ciphers=0xa
        madwifi_configure_wpa: key management algorithms=0x1
        madwifi_configure_wpa: rsn capabilities=0x0
        madwifi_configure_wpa: enable WPA=0x1
        WPA: group state machine entering state GTK_INIT (VLAN-ID 0)
        GMK - hexdump(len=32): [REMOVED]
        GTK - hexdump(len=32): [REMOVED]
        WPA: group state machine entering state SETKEYSDONE (VLAN-ID 0)
        madwifi_set_key: alg=TKIP addr=00:00:00:00:00:00 key_idx=1
        madwifi_set_privacy: enabled=1
        madwifi_set_iface_flags: dev_up=1
        ath0: Setup of interface done.
        l2_packet_receive - recvfrom: Network is down
        Wireless event: cmd=0x8b1a len=24
        Wireless event: cmd=0x8c03 len=20
        New STA
        ioctl[unknown???]: Invalid argument
        madwifi_process_wpa_ie: Failed to get WPA/RSN IE
        Failed to get WPA/RSN information element.
        Data frame from not associated STA 00:0a:78:a0:0b:09
        Wireless event: cmd=0x8c04 len=20
        Wireless event: cmd=0x8c03 len=20
        New STA
        ioctl[unknown???]: Invalid argument
        madwifi_process_wpa_ie: Failed to get WPA/RSN IE
        Failed to get WPA/RSN information element.
        Data frame from not associated STA 00:0a:78:a0:0b:09
        Data frame from not associated STA 00:0a:78:a0:0b:09
        Data frame from not associated STA 00:0a:78:a0:0b:09
        Wireless event: cmd=0x8c04 len=20
        Wireless event: cmd=0x8c03 len=20
        New STA
        ioctl[unknown???]: Invalid argument
        madwifi_process_wpa_ie: Failed to get WPA/RSN IE
        Failed to get WPA/RSN information element.
        Data frame from not associated STA 00:0a:78:a0:0b:09

    Read the article

  • Linux policy routing - packets not coming back

    - by Bugsik
    I am trying to set up policy routing on my home server. My network looks like this:

        Host                 routed VPN gateway     Internet link through VPN
        192.168.0.35/24 ---> 192.168.0.5/24    ---> 192.168.0.1  DSL router
                             10.200.2.235/22   .... 10.200.0.1   VPN server

    The traffic from 192.168.0.32/27 should be, and is, routed through the VPN. I wanted to define some routing policies to route some traffic from 192.168.0.5 through the VPN as well; for a start, traffic from the user with uid 2000. Policy routing is done using the iptables mark target and an ip rule fwmark.

    The problem: when connecting as user 2000 from 192.168.0.5, tcpdump shows outgoing packets, but nothing comes back. Traffic from 192.168.0.35 works fine (here I am not using fwmark but a src policy). Here is my VPN gateway setup:

        # uname -a
        Linux placebo 3.2.0-34-generic #53-Ubuntu SMP Thu Nov 15 10:49:02 UTC 2012 i686 i686 i386 GNU/Linux
        # iptables -V
        iptables v1.4.12
        # ip -V
        ip utility, iproute2-ss111117

    IPtables rules (all policies in table filter are ACCEPT):

        # iptables -t mangle -nvL
        Chain PREROUTING (policy ACCEPT 770K packets, 314M bytes)
         pkts bytes target prot opt in out source destination
        Chain INPUT (policy ACCEPT 767K packets, 312M bytes)
         pkts bytes target prot opt in out source destination
        Chain FORWARD (policy ACCEPT 5520 packets, 1920K bytes)
         pkts bytes target prot opt in out source destination
        Chain OUTPUT (policy ACCEPT 782K packets, 901M bytes)
         pkts bytes target prot opt in out source destination
           74  4707 MARK all -- * * 0.0.0.0/0 0.0.0.0/0 owner UID match 2000 MARK set 0x3
        Chain POSTROUTING (policy ACCEPT 788K packets, 903M bytes)
         pkts bytes target prot opt in out source destination

        # iptables -t nat -nvL
        Chain PREROUTING (policy ACCEPT 996 packets, 51172 bytes)
         pkts bytes target prot opt in out source destination
        Chain INPUT (policy ACCEPT 7 packets, 432 bytes)
         pkts bytes target prot opt in out source destination
        Chain OUTPUT (policy ACCEPT 1364 packets, 112K bytes)
         pkts bytes target prot opt in out source destination
        Chain POSTROUTING (policy ACCEPT 2302 packets, 160K bytes)
         pkts bytes target prot opt in out source destination
          119  7588 MASQUERADE all -- * vpn 0.0.0.0/0 0.0.0.0/0

    Routing:

        # ip addr show
        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
            inet6 ::1/128 scope host
               valid_lft forever preferred_lft forever
        2: eth0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master lan state UNKNOWN qlen 1000
            link/ether 00:40:63:f9:c3:8f brd ff:ff:ff:ff:ff:ff
               valid_lft forever preferred_lft forever
        3: lan: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
            link/ether 00:40:63:f9:c3:8f brd ff:ff:ff:ff:ff:ff
            inet 192.168.0.5/24 brd 192.168.0.255 scope global lan
            inet6 fe80::240:63ff:fef9:c38f/64 scope link
               valid_lft forever preferred_lft forever
        4: vpn: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 100
            link/none
            inet 10.200.2.235/22 brd 10.200.3.255 scope global vpn

        # ip rule show
        0:      from all lookup local
        32764:  from all fwmark 0x3 lookup VPN
        32765:  from 192.168.0.32/27 lookup VPN
        32766:  from all lookup main
        32767:  from all lookup default

        # ip route show table VPN
        default via 10.200.0.1 dev vpn
        10.200.0.0/22 dev vpn proto kernel scope link src 10.200.2.235
        192.168.0.0/24 dev lan proto kernel scope link src 192.168.0.5

        # ip route show
        default via 192.168.0.1 dev lan metric 100
        10.200.0.0/22 dev vpn proto kernel scope link src 10.200.2.235
        192.168.0.0/24 dev lan proto kernel scope link src 192.168.0.5

    TCP dump showing no traffic coming back when the connection is made from 192.168.0.5 as user 2000:

        # tcpdump -i vpn
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on vpn, link-type RAW (Raw IP), capture size 65535 bytes

        ### Traffic from user 2000 on 192.168.0.5 ###
        10:19:05.629985 IP 10.200.2.235.37291 > 10.100-78-194.akamai.com.http: Flags [S], seq 2868799562, win 14600, options [mss 1460,sackOK,TS val 6887764 ecr 0,nop,wscale 4], length 0
        10:19:21.678001 IP 10.200.2.235.37291 > 10.100-78-194.akamai.com.http: Flags [S], seq 2868799562, win 14600, options [mss 1460,sackOK,TS val 6891776 ecr 0,nop,wscale 4], length 0

        ### Traffic from 192.168.0.35 ###
        10:23:12.066174 IP 10.200.2.235.49247 > 10.100-78-194.akamai.com.http: Flags [S], seq 2294159276, win 65535, options [mss 1460,nop,wscale 4,nop,nop,TS val 557451322 ecr 0,sackOK,eol], length 0
        10:23:12.265640 IP 10.100-78-194.akamai.com.http > 10.200.2.235.49247: Flags [S.], seq 2521908813, ack 2294159277, win 14480, options [mss 1367,sackOK,TS val 388565772 ecr 557451322,nop,wscale 1], length 0
        10:23:12.276573 IP 10.200.2.235.49247 > 10.100-78-194.akamai.com.http: Flags [.], ack 1, win 8214, options [nop,nop,TS val 557451534 ecr 388565772], length 0
        10:23:12.293030 IP 10.200.2.235.49247 > 10.100-78-194.akamai.com.http: Flags [P.], seq 1:480, ack 1, win 8214, options [nop,nop,TS val 557451552 ecr 388565772], length 479
        10:23:12.574773 IP 10.100-78-194.akamai.com.http > 10.200.2.235.49247: Flags [.], ack 480, win 7776, options [nop,nop,TS val 388566081 ecr 557451552], length 0
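    One way to test the fwmark path in isolation, independent of the iptables owner match, is to generate marked traffic directly with the SO_MARK socket option. This is an editorial sketch, not from the original question (root privileges are required, the target host is a placeholder, and SO_MARK's Linux value of 36 is hard-coded as a fallback for Pythons that do not export the constant). If this marked probe also hangs while an unmarked one succeeds, the reply path on the vpn interface (for instance rp_filter) is a commonly checked suspect:

        #!/usr/bin/env python3
        # Open a TCP connection whose packets carry fwmark 0x3, so they should
        # match the "from all fwmark 0x3 lookup VPN" rule without iptables help.
        import socket

        SO_MARK = getattr(socket, 'SO_MARK', 36)  # Linux-only socket option

        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, SO_MARK, 0x3)  # needs CAP_NET_ADMIN/root
        s.settimeout(10)
        try:
            s.connect(('example.com', 80))  # placeholder external host
            print('marked connection established via the VPN table')
        except OSError as exc:
            print('marked connection failed: %s' % exc)
        finally:
            s.close()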

    Read the article

  • Gratuitous CRLF in Subject: line - why is it there, and is it legal?

    - by MadHatter
    I'm running into a problem with a NAGIOS system sending emails to a popular email-to-SMS service. The email-to-SMS service takes emails with text in the Subject: line, and sends them on to the mobile number encoded in the To: field. So far so good. Sadly, sendmail (and postfix before it) seem to be inserting a gratuitous CRLF into the (necessarily long) Subject: line, and that's causing my SMS messages to be truncated at the CRLF if and only if the Subject: line contains one or more colons past the gratuitous CRLF. I am confident that the messages are being created correctly, but just to be sure, here's me creating a completely noddy test message to myself, with a long Subject: line:

        echo "foo" | mail -s "1234567 101234567 201234567 301234567 401234567 501234567 601234567 701234567 801234567 90123456789" reaper@teaparty.net

    Note there's no extra colon in this Subject: line; all I'm doing here is showing that an extra CRLF is inserted on the wire. Here's the result of sudo ngrep -x port 25:

        44 61 74 65 3a 20 46 72    69 2c 20 33 31 20 4d 61    Date: Fri, 31 Ma
        79 20 32 30 31 33 20 31    30 3a 34 33 3a 35 35 20    y 2013 10:43:55
        2b 30 31 30 30 0d 0a 54    6f 3a 20 72 65 61 70 65    +0100..To: reape
        72 40 74 65 61 70 61 72    74 79 2e 6e 65 74 0d 0a    r@teaparty.net..
        53 75 62 6a 65 63 74 3a    20 31 32 33 34 35 36 37    Subject: 1234567
        20 31 30 31 32 33 34 35    36 37 20 32 30 31 32 33     101234567 20123
        34 35 36 37 20 33 30 31    32 33 34 35 36 37 20 34    4567 301234567 4
        30 31 32 33 34 35 36 37    20 35 30 31 32 33 34 35    01234567 5012345
        36 37 0d 0a 20 36 30 31    32 33 34 35 36 37 20 37    67.. 601234567 7
        30 31 32 33 34 35 36 37    20 38 30 31 32 33 34 35    01234567 8012345
        36 37 20 39 30 31 32 33    34 35 36 37 38 39 0d 0a    67 90123456789..
        55 73 65 72 2d 41 67 65    6e 74 3a 20 48 65 69 72    User-Agent: Heir
        6c 6f 6f 6d 20 6d 61 69    6c 78 20 31 32 2e 34 20    loom mailx 12.4
        37 2f 32 39 2f 30 38 0d    0a 4d 49 4d 45 2d 56 65    7/29/08..MIME-Ve
        72 73 69 6f 6e 3a 20 31    2e 30 0d 0a 43 6f 6e 74    rsion: 1.0..Cont
        65 6e 74 2d 54 79 70 65    3a 20 74 65 78 74 2f 70    ent-Type: text/p
        6c 61 69 6e 3b 20 63 68    61 72 73 65 74 3d 75 73    lain; charset=us

    About half way down, between the 501234567 and the 601234567 in the original Subject: header, you can see a CRLF being inserted (0x0d 0x0a in the left-hand hex dump, .. in the right-hand plain text). The receiving MTA seems happy to post-process this, and when I look at the on-disc stored mail at the receiving end, I see only a LF (0x0a) in the Subject: line, and the line is parsed correctly and in its entirety by, e.g., alpine. Nevertheless, the CRLF is there on the wire, and between me and the (excellent) email-to-SMS support people, we've established that these are the cause of the problem.

    So my question is: is it lawful for an MTA to insert a gratuitous CRLF on the wire? If it is, and I can prove it, then it's the email-to-SMS house's problem, because they are being intolerant. If it isn't, or it is but I can't prove it, then it becomes my problem, so an answer with references would be most useful.

    Edit: I can now come clean that the email-to-SMS service in question is kapow. Once this problem was explained to them, they got it, worked with me to develop and test a fix, and have deployed the fix. My long subject lines with colons in now get relayed correctly into SMSes.
    I don't normally trumpet individual companies, especially not on SF, but I thought it worthy of note that kapow Did The Right Thing. (Disclaimer: I have no connection with kapow except as a paying customer who's happy about the way they dealt with his problem.)
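    For context alongside the question above (an editorial note, not from the original post): RFC 5322 section 2.2.3 explicitly permits long header fields to be folded on the wire as CRLF followed by whitespace, and obliges receivers to unfold them, which matches the behaviour captured in the ngrep dump. Python's email package reproduces the same folding, as this sketch shows:

        #!/usr/bin/env python3
        # Demonstrates RFC 5322 header folding: a long Subject: is split with
        # CRLF + space on the wire, much like the sendmail capture above.
        from email.message import EmailMessage
        from email import policy

        msg = EmailMessage()
        msg['To'] = 'reaper@teaparty.net'
        msg['Subject'] = ('1234567 101234567 201234567 301234567 401234567 '
                          '501234567 601234567 701234567 801234567 90123456789')
        msg.set_content('foo')

        wire = msg.as_bytes(policy=policy.SMTP)   # SMTP policy uses CRLF line endings
        print(repr(wire.split(b'\r\n\r\n')[0]))   # headers only; note the b'\r\n ' fold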

    Read the article

  • Reverse lookup SERVFAIL

    - by Quan Tran
    I just set up a DNS server and a web server using VirtualBox. The IP address of the DNS server is 192.168.56.101 and the web server 192.168.56.102. Here are my configuration files for the DNS server.

    named.conf:

        //
        // named.conf
        //
        // Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
        // server as a caching only nameserver (as a localhost DNS resolver only).
        //
        // See /usr/share/doc/bind*/sample/ for example named configuration files.
        //
        options {
            directory "/var/named";
            dump-file "/var/named/data/cache_dump.db";
            statistics-file "/var/named/data/named_stats.txt";
            memstatistics-file "/var/named/data/named_mem_stats.txt";
            //query-source address * port 53;
            //forward first;
            forwarders { 8.8.8.8; 8.8.4.4; };
            listen-on port 53 { 127.0.0.1; 192.168.56.0/24; };
            allow-query { localhost; 192.168.56.0/24; };
            recursion yes;
            dnssec-enable yes;
            dnssec-validation yes;
            dnssec-lookaside auto;
            /* Path to ISC DLV key */
            bindkeys-file "/etc/named.iscdlv.key";
            managed-keys-directory "/var/named/dynamic";
        };
        logging {
            channel default_debug {
                file "data/named.run";
                severity debug 10;
                print-category yes;
                print-time yes;
                print-severity yes;
            };
        };
        zone "quantran.com" in {
            type master;
            file "named.quantran.com";
        };
        zone "56.168.192.in-addr.arpa" in {
            type master;
            file "named.192.168.56";
            allow-update { none; };
        };
        include "/etc/named.rfc1912.zones";
        include "/etc/named.root.key";

    named.quantran.com:

        $TTL 86400
        quantran.com.   IN SOA  dns1.quantran.com. root.quantran.com. (
                                100     ; serial
                                3600    ; refresh
                                600     ; retry
                                604800  ; expire
                                86400 )
                        IN NS   dns1.quantran.com.
        dns1.quantran.com.  IN A    192.168.56.101
        www.quantran.com.   IN A    192.168.56.102

    named.192.168.56:

        $TTL 86400
        $ORIGIN 56.168.192.in-addr.arpa.
        @       IN SOA  dns1.quantran.com. root.quantran.com. (
                        100     ; serial
                        3600    ; refresh
                        600     ; retry
                        604800  ; expire
                        86400 ) ; minimum
                IN NS   dns1.quantran.com.
        101.56.168.192.in-addr.arpa.    IN PTR  dns1.quantran.com.
        102                             IN PTR  www.quantran.com.

    When I try a normal lookup from the host (I configured the host so that the only nameserver it uses is the DNS server 192.168.56.101):

        quan@quantran:~$ host www.quantran.com
        www.quantran.com has address 192.168.56.102
        quan@quantran:~$ host dns1.quantran.com
        dns1.quantran.com has address 192.168.56.101

    But when I try a reverse lookup:

        quan@quantran:~$ host -v 192.168.56.101 192.168.56.101
        Trying "101.56.168.192.in-addr.arpa"
        Using domain server:
        Name: 192.168.56.101
        Address: 192.168.56.101#53
        Aliases:
        Host 101.56.168.192.in-addr.arpa not found: 2(SERVFAIL)
        Received 45 bytes from 192.168.56.101#53 in 0 ms

        quan@quantran:~$ host -v 192.168.56.102 192.168.56.101
        Trying "102.56.168.192.in-addr.arpa"
        Using domain server:
        Name: 192.168.56.101
        Address: 192.168.56.101#53
        Aliases:
        Host 102.56.168.192.in-addr.arpa not found: 2(SERVFAIL)
        Received 45 bytes from 192.168.56.101#53 in 0 ms

    So why can't I perform a reverse lookup? Is anything wrong with the zone configuration files?
    Thanks in advance :) Oh, here is the output from the log file /var/named/data/named.run when I perform the reverse lookup:

        quan@quantran:~$ host 192.168.56.102 192.168.56.101
        Using domain server:
        Name: 192.168.56.101
        Address: 192.168.56.101#53
        Aliases:
        Host 102.56.168.192.in-addr.arpa not found: 2(SERVFAIL)

    /var/named/data/named.run:

        02-Jun-2014 15:18:11.950 client: debug 3: client 192.168.56.1#51786: UDP request
        02-Jun-2014 15:18:11.950 client: debug 5: client 192.168.56.1#51786: using view '_default'
        02-Jun-2014 15:18:11.950 security: debug 3: client 192.168.56.1#51786: request is not signed
        02-Jun-2014 15:18:11.950 security: debug 3: client 192.168.56.1#51786: recursion available
        02-Jun-2014 15:18:11.950 client: debug 3: client 192.168.56.1#51786: query
        02-Jun-2014 15:18:11.950 client: debug 10: client 192.168.56.1#51786: ns_client_attach: ref = 1
        02-Jun-2014 15:18:11.950 query-errors: debug 1: client 192.168.56.1#51786: query failed (SERVFAIL) for 102.56.168.192.in-addr.arpa/IN/PTR at query.c:5428
        02-Jun-2014 15:18:11.950 client: debug 3: client 192.168.56.1#51786: error
        02-Jun-2014 15:18:11.950 client: debug 3: client 192.168.56.1#51786: send
        02-Jun-2014 15:18:11.950 client: debug 3: client 192.168.56.1#51786: sendto
        02-Jun-2014 15:18:11.951 client: debug 3: client 192.168.56.1#51786: senddone
        02-Jun-2014 15:18:11.951 client: debug 3: client 192.168.56.1#51786: next
        02-Jun-2014 15:18:11.951 client: debug 10: client 192.168.56.1#51786: ns_client_detach: ref = 0
        02-Jun-2014 15:18:11.951 client: debug 3: client 192.168.56.1#51786: endrequest
        02-Jun-2014 15:18:11.951 client: debug 3: client @0xb537e008: udprecv

    Also, I made some changes to the log section in named.conf.
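    An editorial sketch for reproducing the reverse lookup outside of host (this assumes the third-party dnspython 2.x library; it is an illustration, not part of the original question):

        #!/usr/bin/env python3
        # Query the PTR record the same way `host 192.168.56.102 192.168.56.101`
        # does, using dnspython (pip install dnspython).
        import dns.resolver
        import dns.reversename

        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = ['192.168.56.101']  # the BIND box from the question

        name = dns.reversename.from_address('192.168.56.102')
        print(name)  # 102.56.168.192.in-addr.arpa.
        try:
            for rr in resolver.resolve(name, 'PTR'):
                print('PTR ->', rr.target)
        except dns.resolver.NoNameservers as exc:
            # A SERVFAIL from every listed server surfaces here in dnspython 2.x.
            print('lookup failed:', exc)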

    Read the article

  • Bind: dns not 'spreaded'

    - by realtebo
    I have elfoip.net served by BIND:

        $ whois elfoip.net | grep 'Name Server'
        Name Server: NS.ELFOIP.NET

    I need elfoip.net to be able to serve third-level domains, like mickymouse.elfoip.net, etc. Yes, I'm trying to create another useless dyndns clone. I've added some third levels as A RRs. E.g., executing this from the server itself:

        $ dig @localhost mattinauno.elfoip.net

        ;; ANSWER SECTION:
        mattinauno.elfoip.net.  60  IN  A  192.81.221.113

    I was expecting that within one or two days I could type mattinauno.elfoip.net into the browser on my PC and get the page at 192.81.221.113. But this is not happening. Are there any prerequisites to satisfy before my ISP's DNS will forward resolution of *.elfoip.net to MY DNS (or ask mine and then cache)? The TTL of the zone is set to 5m. I have no AllowQuery directive; is it necessary for other DNS servers to be able to cache from mine? I've checked the zone with the BIND utility named-checkzone, but no errors were detected. How do I diagnose why other DNS servers don't take RRs from mine into account?

    From my home PC:

        $ dig @ns.elfoip.net mattinauno.elfoip.net

        ;; ANSWER SECTION:
        mattinauno.elfoip.net.  60   IN  A   192.81.221.113

        ;; AUTHORITY SECTION:
        elfoip.net.             300  IN  NS  ns.elfoip.net.

    but dig @8.8.8.8 mattinauno.elfoip.net gives no answers.

    Whole zone file (note: I've used nsupdate, so this file has been re-edited and re-formatted by that utility!):

        root@mirko:/var/named# cat elfoip.net.db
        $ORIGIN .
        $TTL 300        ; 5 minutes
        elfoip.net      IN SOA  ns.elfoip.net. hostmaster.elfoip.net. (
                                2013062314 ; serial
                                3600       ; refresh (1 hour)
                                600        ; retry (10 minutes)
                                86400      ; expire (1 day)
                                60         ; minimum (1 minute)
                                )
                        NS      ns.elfoip.net.
                        A       109.168.99.6
        $ORIGIN elfoip.net.
        $TTL 60         ; 1 minute
        google          A       173.194.35.56
        maiscai         A       192.81.221.113
        mattinadue      A       192.81.221.113
        mattinauno      A       192.81.221.113
        $TTL 300        ; 5 minutes
        ns              A       109.168.99.6
        $TTL 60         ; 1 minute
        prova           A       208.67.222.222
        prova2          A       13.23.34.45
                        A       13.23.34.46
        www             CNAME   elfoip.net.

    EDIT: added named.conf.local:

        zone "elfoip.net" {
            type master;
            // file "/etc/bind/elfoip.net.db";
            file "/var/named/elfoip.net.db";
            allow-update { key elfoip.net ; };
        };

    EDIT: I have not set up a listen-on directive.

    EDIT: added a tcpdump capture taken after dig @ns.elfoip.net www.elfoip.net from a machine which uses my company's internal DNS, which allows recursive queries:

        root@mirko:~# tcpdump -i eth0 'port 53'
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
        11:57:23.293611 IP host9-210-static.22-87-b.business.telecomitalia.it.45958 > mirko.elfoip.net.domain: 61337+ A? www.elfoip.net. (32)
        11:57:23.294114 IP mirko.elfoip.net.domain > host9-210-static.22-87-b.business.telecomitalia.it.45958: 61337* 2/1/1 CNAME elfoip.net., A 109.168.99.6 (95)
        11:57:23.294554 IP mirko.elfoip.net.59571 > google-public-dns-a.google.com.domain: 45851+ PTR? 9.210.22.87.in-addr.arpa. (42)
        11:57:23.330444 IP google-public-dns-a.google.com.domain > mirko.elfoip.net.59571: 45851 1/0/0 PTR host9-210-static.22-87-b.business.telecomitalia.it. (106)
        11:57:23.331181 IP mirko.elfoip.net.44171 > google-public-dns-a.google.com.domain: 33339+ PTR? 8.8.8.8.in-addr.arpa. (38)
        11:57:23.439405 IP google-public-dns-a.google.com.domain > mirko.elfoip.net.44171: 33339 1/0/0 PTR google-public-dns-a.google.com. (82)
        11:57:31.350654 IP host9-210-static.22-87-b.business.telecomitalia.it.30108 > mirko.elfoip.net.domain: 38269 [1au] A? ns.elfoip.net. (42)
        11:57:31.351117 IP mirko.elfoip.net.domain > host9-210-static.22-87-b.business.telecomitalia.it.30108: 38269* 1/1/1 A 109.168.99.6 (72)

    If I dig @8.8.8.8 www.elfoip.net, NOTHING happens in the dump log!
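    An editorial sketch for narrowing down this kind of split, where the authoritative server answers but public recursors do not; in setups like this it often points at the delegation or glue at the parent .net zone rather than at the zone file itself (assumes the third-party dnspython 2.x library; not part of the original question):

        #!/usr/bin/env python3
        # Compare the answer from the authoritative server with the answer a
        # public recursor can obtain; a mismatch suggests delegation or glue
        # problems at the parent zone rather than in the zone file.
        import dns.resolver

        def ask(server, name):
            r = dns.resolver.Resolver(configure=False)
            r.nameservers = [server]
            try:
                return [rr.to_text() for rr in r.resolve(name, 'A')]
            except Exception as exc:
                return 'failed: %r' % exc

        name = 'mattinauno.elfoip.net'
        print('authoritative (109.168.99.6):', ask('109.168.99.6', name))
        print('google recursor (8.8.8.8):   ', ask('8.8.8.8', name))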

    Read the article

  • KVM Virtual guest Paused on Reboot

    - by David Hamilton
    I'm running RHEL 6 and just installed an Ubuntu Server guest via KVM, set to start at boot. This works correctly and the guest loads, but it loads "paused" and requires that I manually un-pause it. Can someone give me a hint as to how I can get the guest OS to actually become active on boot? Here is the libvirt dump as requested. I also tried libvirt autostart; no effect.

        <domain type='kvm' id='1'>
          <name>MailServer</name>
          <uuid>a61dae75-1f5c-d536-718f-3c615d9b4868</uuid>
          <memory>4194304</memory>
          <currentMemory>4194304</currentMemory>
          <vcpu>4</vcpu>
          <os>
            <type arch='x86_64' machine='rhel6.0.0'>hvm</type>
            <boot dev='hd'/>
          </os>
          <features>
            <acpi/>
            <apic/>
            <pae/>
          </features>
          <clock offset='utc'/>
          <on_poweroff>destroy</on_poweroff>
          <on_reboot>restart</on_reboot>
          <on_crash>restart</on_crash>
          <devices>
            <emulator>/usr/libexec/qemu-kvm</emulator>
            <disk type='file' device='disk'>
              <driver name='qemu' type='raw' cache='none'/>
              <source file='/home/MailServer/MailServer-1.img'/>
              <target dev='hda' bus='ide'/>
              <alias name='ide0-0-0'/>
              <address type='drive' controller='0' bus='0' unit='0'/>
            </disk>
            <disk type='block' device='cdrom'>
              <driver name='qemu' type='raw'/>
              <target dev='hdc' bus='ide'/>
              <readonly/>
              <alias name='ide0-1-0'/>
              <address type='drive' controller='0' bus='1' unit='0'/>
            </disk>
            <controller type='ide' index='0'>
              <alias name='ide0'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
            </controller>
            <interface type='bridge'>
              <mac address='52:54:00:cd:f9:9f'/>
              <source bridge='br0'/>
              <target dev='vnet0'/>
              <model type='virtio'/>
              <alias name='net0'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
            </interface>
            <serial type='pty'>
              <source path='/dev/pts/1'/>
              <target port='0'/>
              <alias name='serial0'/>
            </serial>
            <console type='pty' tty='/dev/pts/1'>
              <source path='/dev/pts/1'/>
              <target port='0'/>
              <alias name='serial0'/>
            </console>
            <input type='mouse' bus='ps2'/>
            <graphics type='vnc' port='5900' autoport='yes'/>
            <sound model='ac97'>
              <alias name='sound0'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
            </sound>
            <video>
              <model type='cirrus' vram='9216' heads='1'/>
              <alias name='video0'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
            </video>
            <memballoon model='virtio'>
              <alias name='balloon0'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
            </memballoon>
          </devices>
          <seclabel type='dynamic' model='selinux'>
            <label>system_u:system_r:svirt_t:s0:c211,c271</label>
            <imagelabel>system_u:object_r:svirt_image_t:s0:c211,c271</imagelabel>
          </seclabel>
        </domain>
    Read the article

  • Bad disks in ancient server

    - by Joel Coel
    I have a 1998-era Netware 3.12 server that runs everything on our campus: general ledger, purchasing, payroll, student information, grades, you name it. The server has an Adaptec RAID controller with two volumes: RAID 1, 2 17GB scsi disks, Seagate ST318417W RAID 5, 3 4GB scsi disks, 2 Seagate ST34573W and 1 ST34572W. We are currently in the early stages of a project to replace this system, but you don't just jump into a new system like that and so I need to keep this server running until at least November 2011. This week we had not one but two hard drives fail. Thankfully they are from different volumes and we're able to keep running for the moment, but given the close nature of these failures I have serious doubts that I'll be able to avoid catastrophic failure from this server through the November target as is without restoring the RAID redundancy — it'll only take one more drive failure anywhere and I'm completely hosed. We are fortunate enough to have exact match "spares" lying around for both drives, but the spares are in unknown condition. I tried swapping just them in, but the RAID controller isn't smart enough to handle this and it renders the system unbootable. As for the RAID controller itself, there is utility I can get into during POST via a Ctrl-A shortcut, but I can't do much useful from there. To actually manage volumes I must first boot in to Netware, at which point I can use CI/O Array Management Software Version 2.0 to actually look at volume information. I suspect that the normal way to manage things is to boot from a special floppy with the controller software on it, but that floppy is long gone. Going through the options in the RAID software, I think the only supported way to replace a disk in an existing RAID volume is to physically add the disk, boot up and configure it as a "spare" for a volume, force the volume to use the spare to replace an existing down disk (and at this point I'm only guessing) so that the down disk becomes the spare, repair the volume, remove the spare from the volume, and then shut down and remove the disk. Then start all over for the other failed disk. All this amounts to a lot of downtime, assuming I can even make it work and that my spares are any good. As for finding reliable spares, I have no clue where to even begin looking to find a new 4GB scsi drive, or even which exact scsi system I'm looking for, as it's gone through a few different iterations over time. Another option is to migrate this to a virtual machine (hyper-v), but all previous attempts we've made in this area have failed to get very far. When this machine was installed I was just graduating from high school, and so it requires lower level knowledge of netware and dos than I ever developed, or if I did have since forgotten (I'm not exactly a dos neophyte, either). Part of my problem is this is a high-use server, and taking it down for a few days to figure things out isn't gonna fly very well. As for the question, I'm looking for anything that might be helpful in this situation: a recommendation on a place to find good spares from this era, personal experience repairing RAID volumes using a similar controller or building a hyper-v vm from an old netware server, a line on a floppy with better software for the RAID controller, recommendation on a good Novell consultant in Nebraska that would be able to put things right, a whole other option I haven't considered yet, etc. 
Update: For backups, we have good (recently verified via restore) backups of the data only -- nothing for the software that actually runs things. Update 2: Just a progress report that I currently have a working Netware 3.12 install in VMWare Virtual Server 2.0, thanks largely to the guide I found here: http://cerbulescubogdan.blogspot.com/2010/11/novell-netware-312-on-vmware.html The next steps are preparing empty netware volumes to match the additional volumes on my existing server, taking a dump of everything on the C:\ drive and netware volumes on my existing server, and figuring out from that information what modules need added to netware, installing my licenses (we do still have that disk, if it's any good), and moving data over. I have approval to bring the server down for a week after the first of the year (sadly not before), so, aside from creating empty volumes, the rest of the work will have to wait until then. Final Update (Jan 5, 2011): I was able to get spares working in both raid arrays without data loss this week. Both are now listed by the controller as "FAULT TOLLERANT" (yay!). I was also able to build on the progress from my last update and now have a functional "spare" server in VMWare Server 2.0. The spare can run and use our erp software, but I can't put it into production because I can't (yet) print from that box (and I have no idea why). Even so, this VM will do in a pinch if I have no other choice, and between it and the repaired RAID arrays I'm comfortable pushing on until I can junk the machine in November.

    Read the article

  • Windows 7 cannot join samba domain

    - by Antonis Christofides
    I have a 3.5.6 samba server with an LDAP backend (both on Debian 6.0). I've been successfully adding Windows XP machines to the domain for years. I now try to add Windows 7. I have made the recommended registry changes, but I don't have any success so far. Here is what happens:

    1. I go to computer name, select "Domain" instead of "Workgroup", type in the domain name, click OK. It asks me for the username and password of an account that can add computers to the domain; I enter them. After about 40 seconds, I get the following message:

       The following error occurred attempting to join the domain "ITIA":
       The specified computer account could not be found. Contact an administrator to verify the account is in the domain. If the account has been deleted unjoin, reboot, and rejoin the domain.

    Despite this, the samba server successfully creates the computer account.

    2. Therefore, if I try again a second time, without deleting the already created computer account, I get a different error:

       The following error occurred attempting to join the domain "ITIA":
       The specified account already exists.

    (Note that until a while ago samba wasn't configured to automatically create computer accounts. What I did whenever I wanted an XP to join was to manually create it. When I first attempted to solve the Windows 7 join problem, I set up samba to do this automatically, as this is what most people do, as I understand, and I thought that it might be related. I haven't attempted to add an XP since I made this change, so I don't know if it works, but whether it works or not, the problem remains.)

    Update 1: Here are the relevant parts of smb.conf:

        [global]
            panic action = /usr/share/samba/panic-action %d
            workgroup = ITIA
            server string = Itia file server
            announce as = NT
            interfaces = 147.102.160.1
            volume = %h
            passdb backend = ldapsam:ldap://ldap.itia.ntua.gr:389
            ldap admin dn = uid=samba,ou=daemons,dc=itia,dc=ntua,dc=gr
            ldap ssl = off
            ldap suffix = dc=itia,dc=ntua,dc=gr
            ldap user suffix = ou=people
            ldap group suffix = ou=groups
            ldap machine suffix = ou=computers
            unix password sync = no
            add machine script = smbldap-useradd -w -i %u
            log file = /var/log/samba/samba-log.all
            log level = 3
            max log size = 5000
            syslog = 2
            socket options = SO_KEEPALIVE TCP_NODELAY
            encrypt passwords = true
            password level = 1
            security = user
            domain master = yes
            local master = no
            wins support = yes
            domain logons = yes
            idmap gid = 1000-2000

    Update 2: The server has a single network interface eth1 (also an unused eth0 that shows up only in the kernel boot messages) and two IP addresses: the main one, 147.102.160.1, and an additional one, 147.102.160.37, that comes up with "ip addr add 147.102.160.37/32 dev eth1" (used only for a web site that has a different certificate than other web sites served from the same machine). One of the problems I recently faced was that samba was using the latter IP address. I fixed that by adding the "interfaces = 147.102.160.1" statement in smb.conf.
    Now:

        acheloos:/etc/apache2# tcpdump host 147.102.160.40 and not port 5900
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes
        13:13:56.549048 IP lithaios.itia.civil.ntua.gr.netbios-dgm > 147.102.160.255.netbios-dgm: NBT UDP PACKET(138)
        13:13:56.549056 ARP, Request who-has acheloos2.itia.civil.ntua.gr tell lithaios.itia.civil.ntua.gr, length 46
        13:13:56.549091 ARP, Reply acheloos2.itia.civil.ntua.gr is-at 00:10:4b:b4:9e:59 (oui Unknown), length 28
        13:13:56.549324 IP acheloos.itia.civil.ntua.gr.netbios-dgm > lithaios.itia.civil.ntua.gr.netbios-dgm: NBT UDP PACKET(138)
        13:13:56.549608 IP lithaios.itia.civil.ntua.gr.netbios-dgm > acheloos2.itia.civil.ntua.gr.netbios-dgm: NBT UDP PACKET(138)
        13:13:56.549741 IP acheloos.itia.civil.ntua.gr.netbios-dgm > lithaios.itia.civil.ntua.gr.netbios-dgm: NBT UDP PACKET(138)
        13:13:56.550364 IP lithaios.itia.civil.ntua.gr.netbios-dgm > acheloos.itia.civil.ntua.gr.netbios-dgm: NBT UDP PACKET(138)
        13:13:56.550468 IP acheloos.itia.civil.ntua.gr.netbios-dgm > lithaios.itia.civil.ntua.gr.netbios-dgm: NBT UDP PACKET(138)

    (acheloos2 is the second IP address, 147.102.160.37.) The above dump occurs when I click "OK" (to join the domain), until it asks me for the username and password of a user that can join the domain. I don't know why the client is contacting the second IP address. I tried temporarily deactivating it, but I still had some related ARP traffic (though I think not IP traffic).
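    An editorial sketch for double-checking which of the two server addresses actually accepts SMB connections after the interfaces change (plain TCP probes of the NetBIOS session and SMB ports; an illustration, not part of the original question):

        #!/usr/bin/env python3
        # Probe both server addresses on the NetBIOS session (139) and
        # direct-SMB (445) ports to see where samba is really listening.
        import socket

        for addr in ('147.102.160.1', '147.102.160.37'):
            for port in (139, 445):
                s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                s.settimeout(3)
                try:
                    s.connect((addr, port))
                    print('%s:%d open' % (addr, port))
                except OSError as exc:
                    print('%s:%d closed/filtered (%s)' % (addr, port, exc))
                finally:
                    s.close()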

    Read the article

  • Python UnicodeDecodeError with Suds

    - by PylonsN00b
    OK, so I have # -*- coding: utf-8 -*- at the top of my script, and it worked for being able to pull data from the database that had funny chars (Ñ, Õ, é, —, –, ’, …) in it and store that data into variables... but I have run into other problems. See, I pull my data, organize it, and then dump it into variables like so:

        title = product[1]

    where product[1] is from my database result set. Then I load it up for Suds like so:

        array_of_inventory_item_submit = ca_client_inventory.factory.create('ArrayOfInventoryItemSubmit')
        for product in products:
            inventory_item_submit = ca_client_inventory.factory.create('InventoryItemSubmit')
            inventory_item_list = get_item_list(product)
            inventory_item_submit = [inventory_item_list]
            array_of_inventory_item_submit.InventoryItemSubmit.append(inventory_item_submit)
        #Call that service baby!
        ca_client_inventory.service.SynchInventoryItemList(accountID, array_of_inventory_item_submit)

    where get_item_list sets product[1] to title and (including a whole bunch of other nodes):

        inventory_item_submit.Title = title

    So everything runs fine until I call ca_client_inventory.service.SynchInventoryItemList, which contains array_of_inventory_item_submit, which contains the title with the funky char... here is the error:

        Traceback (most recent call last):
          File "upload_all_inventory_ebay.py", line 421, in <module>
            ca_client_inventory.service.SynchInventoryItemList(accountID, array_of_inventory_item_submit)
          File "build/bdist.macosx-10.6-i386/egg/suds/client.py", line 539, in __call__
          File "build/bdist.macosx-10.6-i386/egg/suds/client.py", line 592, in invoke
          File "build/bdist.macosx-10.6-i386/egg/suds/bindings/binding.py", line 118, in get_message
          File "build/bdist.macosx-10.6-i386/egg/suds/bindings/document.py", line 63, in bodycontent
          File "build/bdist.macosx-10.6-i386/egg/suds/bindings/document.py", line 105, in mkparam
          File "build/bdist.macosx-10.6-i386/egg/suds/bindings/binding.py", line 260, in mkparam
          File "build/bdist.macosx-10.6-i386/egg/suds/mx/core.py", line 62, in process
          File "build/bdist.macosx-10.6-i386/egg/suds/mx/core.py", line 75, in append
          File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 102, in append
          File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 243, in append
          File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 182, in append
          File "build/bdist.macosx-10.6-i386/egg/suds/mx/core.py", line 75, in append
          File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 102, in append
          File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 298, in append
          File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 182, in append
          File "build/bdist.macosx-10.6-i386/egg/suds/mx/core.py", line 75, in append
          File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 102, in append
          File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 298, in append
          File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 182, in append
          File "build/bdist.macosx-10.6-i386/egg/suds/mx/core.py", line 75, in append
          File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 102, in append
          File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 243, in append
          File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 182, in append
          File "build/bdist.macosx-10.6-i386/egg/suds/mx/core.py", line 75, in append
          File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 102, in append
          File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 198, in append
          File "build/bdist.macosx-10.6-i386/egg/suds/sax/element.py", line 251, in setText
          File "build/bdist.macosx-10.6-i386/egg/suds/sax/text.py", line 43, in __new__
        UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 116: ordinal not in range(128)

    Now what? My guess is that my script can take in these funky chars because I have # -*- coding: utf-8 -*- at the top, but Suds does NOT have that at the top of its files. Do I really want to go and change the Suds files? We all know that is the least desirable, last-possible solution... what can I do?

    Read the article

  • W3WP crashes when initializing a collection

    - by asbjornu
    I've created an ASP.NET MVC application that has an initializer attached to the PreApplicationStartMethodAttribute. When initializing, a collection is instantiated that implements an interface I've defined. When I instantiate this collection, w3wp.exe crashes with the following two incomprehensible entries in the event log:

        Faulting application name: w3wp.exe, version: 7.5.7600.16385, time stamp: 0x4a5bd0eb
        Faulting module name: clr.dll, version: 4.0.30319.1, time stamp: 0x4ba21eeb
        Exception code: 0xc00000fd
        Fault offset: 0x0000000000001177
        Faulting process id: 0x1348
        Faulting application start time: 0x01cb0224882f4723
        Faulting application path: c:\windows\system32\inetsrv\w3wp.exe
        Faulting module path: C:\Windows\Microsoft.NET\Framework64\v4.0.30319\clr.dll
        Report Id: c6a0941e-6e17-11df-864d-000acd16dcdb

    And:

        Fault bucket , type 0
        Event Name: APPCRASH
        Response: Not available
        Cab Id: 0

        Problem signature:
        P1: w3wp.exe
        P2: 7.5.7600.16385
        P3: 4a5bd0eb
        P4: clr.dll
        P5: 4.0.30319.1
        P6: 4ba21eeb
        P7: c00000fd
        P8: 0000000000001177
        P9:
        P10:

        Attached files:

        These files may be available here:

        Analysis symbol:
        Rechecking for solution: 0
        Report Id: c6a0941e-6e17-11df-864d-000acd16dcdb
        Report Status: 0

    If I remove the instantiation of the collection, the application starts normally. If I leave the instantiation, w3wp crashes. If I modify the interface, w3wp still crashes. I've tried every variation I could come up with on the theme of keeping the instantiation but doing everything else differently, and w3wp still crashes. My biggest issue here is that I have absolutely no idea why w3wp is crashing. It's not a StackOverflowException or anything concrete like that; all I get is the unintelligent junk cited above. I've tried to use DebugDiag and IISState to debug the w3wp process, but DebugDiag is only available for post-dump analysis in x64 (I'm running on Windows 7 x64, so the w3wp process is thus 64 bit) and IISState says the following when I try to run it:

        D:\Programs\iisstate>IISState.exe -p 9204 -d
        Symbol search path is: SRV*D:\Programs\iisstate\symbols*http://msdl.microsoft.com/download/symbols
        IISState is limited to processes associated with IIS. If you require a generic debugger, please use WinDBG or CDB. They are available for download from http://www.microsoft.com/ddk/debugging.
        This error may also occur if a debugger is already attached to the process being checked.
        Incorrect Process Attachment

    I've double-checked 10 times that the process ID of my w3wp process is correct. I suspect that IISState, too, can only debug x86 processes. Setting a breakpoint anywhere in the application does absolutely nothing: the breakpoint isn't hit, and w3wp crashes as soon as the request comes through to IIS from the browser. Starting the application with F5 in Visual Studio 2010, or starting another application to get the w3wp process up and running, attaching the VS2010 debugger to it, and then visiting the faulting application, doesn't help either. I've also tried to add an HTTP module as described in KB-911816, as well as adding this to my web.config file:

        <configuration>
          <runtime>
            <legacyUnhandledExceptionPolicy enabled="true" />
          </runtime>
        </configuration>

    Needless to say, it makes absolutely no difference. So I'm left with no way to debug the w3wp process, no way to extract any information from it, and complete garbage dumped in my event log. If anybody has any idea how to debug this problem, please let me know!

    Update: My collection was initialized based on RouteTable.Routes, which might have thrown an exception (perhaps by not being initialized itself yet at such an early stage of the ASP.NET lifecycle). Postponing the communication with RouteTable.Routes until a later stage solved the problem. While I don't really need an answer to this question anymore, I still find it so obscure that I'll leave it for anyone to comment on and answer, because I found no existing posts on this problem anywhere, so it might be of good reference in the future.

    Read the article

< Previous Page | 78 79 80 81 82 83 84 85 86 87 88  | Next Page >