Search Results

Search found 13494 results on 540 pages for 'desktop effects'.


  • Delete Ubuntu and GRUB from PC (and BIOS inaccessible)

    - by Temitope
    I really made a mess while upgrading 12.04 to 12.10, or my PC did, or Ubuntu did, I can't really tell. The situation now is that I have a dual-boot machine with Windows 7 and Ubuntu 12.10. When turning on the PC I can't access the boot options; I've tried everything (Esc, F1, F8, F10; I have an HP Pavilion), but all I see is a short-lived screen with three lines reading something like "error: files not found" or "link not found", and then the PC goes to the GRUB loader screen. This is already a BIG problem: it means I can't change the boot order, and I'm desperate, since it doesn't seem that repairing my operating systems will bring my BIOS back. If I choose Ubuntu in GRUB, it loads but then freezes on the desktop. I may be having a problem with Unity 3D or Compiz, which were reported as crashing the first few times I started Ubuntu; now, after turning the PC off and on several times, no crash is reported anymore. Ubuntu just loads my background image and nothing else: not the side menu, not the header, nothing. Although the system seems to be "functioning" (when I press the power button on the PC, for example, the normal shutdown dialog appears). If I choose Windows, GRUB tells me that something is not found and to press any key to continue; I do, and then Windows loads perfectly. What I now want to do is: 1) use EasyBCD to change the boot order and boot the Windows partition first, and 2) delete the Ubuntu partitions from within the Windows disk manager. What I expect to happen is that my PC returns to (or near to) its factory boot settings: I press the power button and Windows loads without asking me anything, and I have access to the whole hard disk from within Windows. Is that what will really happen? Are there dangers I'm not seeing? What I don't expect to happen is the BIOS access key starting to work again; how could I eventually solve that? I would like to reinstall Ubuntu, the 32-bit version this time.

    Read the article

  • Enable wireless greyed out. Disabled by hard switch. Details inside

    - by ltlunatic
    OK, so the basic issue here is this: my wireless has never once enabled, across many environments. Wubi install: "Enable wireless" greyed out. Live CD: greyed out. Partition alongside Win7: greyed out. I installed all software updates, checked additional drivers, and installed the latest driver for the RTL8111/8168B. rfkill list shows phy0 and phy1; phy0 has a hard block on, and everything else is unblocked. Now consider this also: I have both a wireless adapter inside my laptop and a wireless USB adapter outside. The internal adapter does not work, which is why I think the hard block is on even though the external slider is 'on'. I have tried commands such as rfkill unblock all as well as sudo rfkill unblock all. There are no wireless options in the BIOS. I have tried another laptop and a desktop and they get wireless from the get-go. Ubuntu on my laptop seems to see my Belkin adapter as well as my internal (Ralink) one, but trying to scan for networks yields no results and it always says the network is down. Ubuntu version: 12.04 LTS. Ask me for anything else you may need to help me, thanks.

    Read the article

  • Ubuntu 12.04 login screen flickers. “Could not write bytes: broken pipes”

    - by Brayn
    I use Ubuntu 12.04 x64 with a dual-boot setup. Yesterday it worked fine, but this morning when I attempted to boot it gets to the login screen and then just flickers, alternating between the login screen and the console showing boot items (mainly Apache, the last one being "Battery status" although it's a desktop), all with [OK] status. The only error I can see is "Could not write bytes: broken pipes" at the top of the screen. The only things I can think of that could cause this are: (1) this morning I had a removable HDD plugged in during boot, which I usually don't have; (2) yesterday I installed Dwarven Fortress, which requires some 32-bit libraries, so I installed ia32 using Synaptic. As far as I know this shouldn't break the system, but I didn't reboot yesterday so I can't be sure. I've tried booting in recovery mode and running the utilities there, but still no luck. I've run out of ideas. Thanks. EDIT: Forgot to mention that all partitions have plenty of free space. EDIT2: In the end I just reinstalled Ubuntu, as time was of the essence.

    Read the article

  • System.Reflection and InvokeMember, storing type, assembly, and object in a class

    - by Cyclone
    I have tested code to call a method inside of a compiled DLL, and it works. I created a class to store the object, type, and loaded assembly, but something is lost in translation because it is unable to find the member I wish to invoke. If I create a type, object, and assembly, and properly load everything into these and perform InvokeMember on the type, it works just fine. However, when I use the things inside of my class, it throws a MissingMemberException and does not invoke the member, obviously. What am I doing wrong? The member in question is a subroutine which takes one argument, a string. This is quite frustrating. Code being called: Dim MyLoadedAssembly As New LoadedAssembly() MyLoadedAssembly.MyAssembly = Assembly.LoadFrom("display.dll") MyLoadedAssembly.MyObject = MyLoadedAssembly.MyAssembly.CreateInstance("Display.UI.Window") MyLoadedAssembly.MyType = MyLoadedAssembly.MyAssembly.GetType("Display.UI.Window") Dim args() As Object = {"test"} MyLoadedAssembly.InvokeMember("Show", args) Private Class LoadedAssembly Public MyType As Type Public MyObject As Object Public MyAssembly As Assembly Public Function InvokeMember(ByVal name As String, ByVal args() As Object) Return MyType.InvokeMember(name, BindingFlags.Default Or BindingFlags.InvokeMethod Or BindingFlags.GetProperty Or BindingFlags.Instance, Nothing, MyObject, args) End Function End Class Code inside of display.dll: Namespace UI Public Class Window Private wind As New System.Windows.Forms.Form Public FullScreen As Boolean = False Public Overloads Sub Show(ByVal text As String) wind.Show() wind.Text = text End Sub Public Overloads Sub Show() wind.Show() End Sub End Class End Namespace The root namespace for display.dll is Display. Why is my code only working when not within this class? System.MissingMethodException was unhandled Message="Method 'Display.UI.Window.Show' not found." 
Source="mscorlib" StackTrace: at System.RuntimeType.InvokeMember(String name, BindingFlags bindingFlags, Binder binder, Object target, Object[] providedArgs, ParameterModifier[] modifiers, CultureInfo culture, String[] namedParams) at System.Type.InvokeMember(String name, BindingFlags invokeAttr, Binder binder, Object target, Object[] args) at IDE.IDE.LoadedAssembly.InvokeMember(String name, Object[] args) in C:\Documents and Settings\Davey\Desktop\RaptorScript\RaptorScript\RaptorScript\IDE.vb:line 69 at IDE.IDE.IDE_Load(Object sender, EventArgs e) in C:\Documents and Settings\Davey\Desktop\RaptorScript\RaptorScript\RaptorScript\IDE.vb:line 50 at System.EventHandler.Invoke(Object sender, EventArgs e) at System.Windows.Forms.Form.OnLoad(EventArgs e) at System.Windows.Forms.Form.OnCreateControl() at System.Windows.Forms.Control.CreateControl(Boolean fIgnoreVisible) at System.Windows.Forms.Control.CreateControl() at System.Windows.Forms.Control.WmShowWindow(Message& m) at System.Windows.Forms.Control.WndProc(Message& m) at System.Windows.Forms.ScrollableControl.WndProc(Message& m) at System.Windows.Forms.ContainerControl.WndProc(Message& m) at System.Windows.Forms.Form.WmShowWindow(Message& m) at System.Windows.Forms.Form.WndProc(Message& m) at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m) at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m) at System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam) at System.Windows.Forms.SafeNativeMethods.ShowWindow(HandleRef hWnd, Int32 nCmdShow) at System.Windows.Forms.Control.SetVisibleCore(Boolean value) at System.Windows.Forms.Form.SetVisibleCore(Boolean value) at System.Windows.Forms.Control.set_Visible(Boolean value) at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context) at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context) at System.Windows.Forms.Application.Run(ApplicationContext context) at Microsoft.VisualBasic.ApplicationServices.WindowsFormsApplicationBase.OnRun() at Microsoft.VisualBasic.ApplicationServices.WindowsFormsApplicationBase.DoApplicationModel() at Microsoft.VisualBasic.ApplicationServices.WindowsFormsApplicationBase.Run(String[] commandLine) at IDE.My.MyApplication.Main(String[] Args) in 17d14f5c-a337-4978-8281-53493378c1071.vb:line 81 at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args) at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args) at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly() at System.Threading.ThreadHelper.ThreadStart_Context(Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ThreadHelper.ThreadStart() InnerException:

    Read the article

  • QtOpenCL make errors. Please help.

    - by Skkard
    So I downloaded the ATI Stream SDK. I don't have a gpu now so I use the '-device cpu' and got the programs/examples in the OpenCl directory working by adding the directory to LD_LIBRARY_PATH etc. Now the problem is when installing QtOpenCl. configure script gives me: skkard@skkard-desktop:~/Applications/qt-labs-opencl$ ./configure This is the QtOpenCL configuration utility. Qt version ............. 4.6.2 qmake .................. /usr/bin/qmake OpenCL ................. yes OpenCL/OpenGL interop .. yes Extra QMAKE_CXXFLAGS ... Extra INCLUDEPATH ...... Extra LIBS ............. -lOpenCL QtOpenCL has been configured. Run '/usr/bin/make' to build. Make gives me: skkard@skkard-desktop:~/Applications/qt-labs-opencl$ make cd src/ && make -f Makefile make[1]: Entering directory `/home/skkard/Applications/qt-labs-opencl/src' cd opencl/ && make -f Makefile make[2]: Entering directory `/home/skkard/Applications/qt-labs-opencl/src/opencl' make[2]: Nothing to be done for `first'. make[2]: Leaving directory `/home/skkard/Applications/qt-labs-opencl/src/opencl' cd openclgl/ && make -f Makefile make[2]: Entering directory `/home/skkard/Applications/qt-labs-opencl/src/openclgl' make[2]: Nothing to be done for `first'. make[2]: Leaving directory `/home/skkard/Applications/qt-labs-opencl/src/openclgl' make[1]: Leaving directory `/home/skkard/Applications/qt-labs-opencl/src' cd examples/ && make -f Makefile make[1]: Entering directory `/home/skkard/Applications/qt-labs-opencl/examples' cd opencl/ && make -f Makefile make[2]: Entering directory `/home/skkard/Applications/qt-labs-opencl/examples/opencl' cd vectoradd/ && make -f Makefile make[3]: Entering directory `/home/skkard/Applications/qt-labs-opencl/examples/opencl/vectoradd' g++ -o vectoradd vectoradd.o qrc_vectoradd.o -L/usr/lib -L../../../lib -L../../../bin -lQtOpenCL -lQtGui -lQtCore -lpthread ../../../lib/libQtOpenCL.so: undefined reference to `clBuildProgram' ../../../lib/libQtOpenCL.so: undefined reference to `clSetCommandQueueProperty' ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueNDRangeKernel' ../../../lib/libQtOpenCL.so: undefined reference to `clSetKernelArg' ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueCopyBufferToImage' ../../../lib/libQtOpenCL.so: undefined reference to `clReleaseMemObject' ../../../lib/libQtOpenCL.so: undefined reference to `clFinish' ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueUnmapMemObject' ../../../lib/libQtOpenCL.so: undefined reference to `clGetMemObjectInfo' ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueReadImage' ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueMarker' ../../../lib/libQtOpenCL.so: undefined reference to `clRetainCommandQueue' ../../../lib/libQtOpenCL.so: undefined reference to `clGetCommandQueueInfo' ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueCopyImage' ../../../lib/libQtOpenCL.so: undefined reference to `clReleaseContext' ../../../lib/libQtOpenCL.so: undefined reference to `clRetainMemObject' ../../../lib/libQtOpenCL.so: undefined reference to `clReleaseEvent' ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueWriteBuffer' ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueCopyBuffer' ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueMapImage' ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueReadBuffer' ../../../lib/libQtOpenCL.so: undefined reference to `clUnloadCompiler' ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueBarrier' 
../../../lib/libQtOpenCL.so: undefined reference to `clGetProgramBuildInfo' ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueWaitForEvents' ../../../lib/libQtOpenCL.so: undefined reference to `clRetainProgram' ../../../lib/libQtOpenCL.so: undefined reference to `clCreateContext' ../../../lib/libQtOpenCL.so: undefined reference to `clCreateImage3D' ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueMapBuffer' ../../../lib/libQtOpenCL.so: undefined reference to `clGetDeviceIDs' ../../../lib/libQtOpenCL.so: undefined reference to `clGetContextInfo' ../../../lib/libQtOpenCL.so: undefined reference to `clGetDeviceInfo' ../../../lib/libQtOpenCL.so: undefined reference to `clReleaseCommandQueue' ../../../lib/libQtOpenCL.so: undefined reference to `clGetSamplerInfo' ../../../lib/libQtOpenCL.so: undefined reference to `clGetPlatformIDs' ../../../lib/libQtOpenCL.so: undefined reference to `clGetSupportedImageFormats' ../../../lib/libQtOpenCL.so: undefined reference to `clGetPlatformInfo' ../../../lib/libQtOpenCL.so: undefined reference to `clWaitForEvents' ../../../lib/libQtOpenCL.so: undefined reference to `clGetEventInfo' ../../../lib/libQtOpenCL.so: undefined reference to `clGetEventProfilingInfo' ../../../lib/libQtOpenCL.so: undefined reference to `clGetImageInfo' ../../../lib/libQtOpenCL.so: undefined reference to `clCreateProgramWithBinary' ../../../lib/libQtOpenCL.so: undefined reference to `clReleaseSampler' ../../../lib/libQtOpenCL.so: undefined reference to `clCreateCommandQueue' ../../../lib/libQtOpenCL.so: undefined reference to `clGetKernelWorkGroupInfo' ../../../lib/libQtOpenCL.so: undefined reference to `clRetainEvent' ../../../lib/libQtOpenCL.so: undefined reference to `clRetainContext' ../../../lib/libQtOpenCL.so: undefined reference to `clCreateSampler' ../../../lib/libQtOpenCL.so: undefined reference to `clReleaseProgram' ../../../lib/libQtOpenCL.so: undefined reference to `clFlush' ../../../lib/libQtOpenCL.so: undefined reference to `clGetProgramInfo' ../../../lib/libQtOpenCL.so: undefined reference to `clCreateKernel' ../../../lib/libQtOpenCL.so: undefined reference to `clRetainKernel' ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueWriteImage' ../../../lib/libQtOpenCL.so: undefined reference to `clCreateBuffer' ../../../lib/libQtOpenCL.so: undefined reference to `clCreateKernelsInProgram' ../../../lib/libQtOpenCL.so: undefined reference to `clCreateProgramWithSource' ../../../lib/libQtOpenCL.so: undefined reference to `clReleaseKernel' ../../../lib/libQtOpenCL.so: undefined reference to `clRetainSampler' ../../../lib/libQtOpenCL.so: undefined reference to `clCreateImage2D' ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueCopyImageToBuffer' ../../../lib/libQtOpenCL.so: undefined reference to `clGetKernelInfo' collect2: ld returned 1 exit status make[3]: *** [vectoradd] Error 1 make[3]: Leaving directory `/home/skkard/Applications/qt-labs-opencl/examples/opencl/vectoradd' make[2]: *** [sub-vectoradd-make_default] Error 2 make[2]: Leaving directory `/home/skkard/Applications/qt-labs-opencl/examples/opencl' make[1]: *** [sub-opencl-make_default] Error 2 make[1]: Leaving directory `/home/skkard/Applications/qt-labs-opencl/examples' make: *** [sub-examples-make_default-ordered] Error 2 Tried it using the '-no-openclgl', but none of the examples etc are compiled. I'm using ubuntu 10.04 using the Qt which is installed from synaptic.
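
    Every unresolved symbol above is a core OpenCL entry point, which usually means the examples are not actually being linked against the Stream SDK's libOpenCL.so. One way to narrow it down is a standalone check program built directly against -lOpenCL; this is only a sketch, and the include/library paths are assumptions about a typical Stream SDK install that need adjusting:

        #include <CL/cl.h>
        #include <cstddef>
        #include <iostream>

        int main() {
            cl_uint numPlatforms = 0;
            // Ask only for the platform count; this exercises one of the same
            // entry points the QtOpenCL examples fail to resolve.
            cl_int err = clGetPlatformIDs(0, NULL, &numPlatforms);
            if (err != CL_SUCCESS) {
                std::cerr << "clGetPlatformIDs failed with error " << err << std::endl;
                return 1;
            }
            std::cout << numPlatforms << " OpenCL platform(s) found" << std::endl;
            return 0;
        }

    If this links with something like g++ cltest.cpp -I<sdk>/include -L<sdk>/lib/x86 -lOpenCL (placeholder paths) but the QtOpenCL examples still do not, the SDK's library directory is probably missing from the -L path that configure wrote into the generated Makefiles.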

    Read the article

  • Linker error when using tr1::regex

    - by Max
    Hello. I've got a program that uses tr1::regex, and while it compiles, it gives me very verbose linker errors. Here's my header file MapObject.hpp: #include <iostream> #include <string> #include <tr1/regex> #include "phBaseObject.hpp" using std::string; namespace phObject { class MapObject: public phBaseObject { private: string color; // must be a hex string represented as "#XXXXXX" static const std::tr1::regex colorRX; // enforces the rule above public: void setColor(const string&); (...) }; } Here's my implementation: #include <iostream> #include <string> #include <tr1/regex> #include "MapObject.hpp" using namespace std; namespace phObject { const tr1::regex MapObject::colorRX("#[a-fA-F0-9]{6}"); void MapObject::setColor(const string& c) { if(tr1::regex_match(c.begin(), c.end(), colorRX)) { color = c; } else cerr << "Invalid color assignment (" << c << ")" << endl; } (...) } and now for the errors: max@max-desktop:~/Desktop/Development/CppPartyHack/PartyHack/lib$ g++ -Wall -std=c++0x MapObject.cpp /tmp/cce5gojG.o: In function std::tr1::basic_regex<char, std::tr1::regex_traits<char> >::basic_regex(char const*, unsigned int)': MapObject.cpp:(.text._ZNSt3tr111basic_regexIcNS_12regex_traitsIcEEEC1EPKcj[std::tr1::basic_regex<char, std::tr1::regex_traits<char> >::basic_regex(char const*, unsigned int)]+0x61): undefined reference tostd::tr1::basic_regex ::_M_compile()' /tmp/cce5gojG.o: In function bool std::tr1::regex_match<__gnu_cxx::__normal_iterator<char const*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, char, std::tr1::regex_traits<char> >(__gnu_cxx::__normal_iterator<char const*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, __gnu_cxx::__normal_iterator<char const*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::tr1::basic_regex<char, std::tr1::regex_traits<char> > const&, std::bitset<11u>)': MapObject.cpp:(.text._ZNSt3tr111regex_matchIN9__gnu_cxx17__normal_iteratorIPKcSsEEcNS_12regex_traitsIcEEEEbT_S8_RKNS_11basic_regexIT0_T1_EESt6bitsetILj11EE[bool std::tr1::regex_match<__gnu_cxx::__normal_iterator<char const*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, char, std::tr1::regex_traits<char> >(__gnu_cxx::__normal_iterator<char const*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, __gnu_cxx::__normal_iterator<char const*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::tr1::basic_regex<char, std::tr1::regex_traits<char> > const&, std::bitset<11u>)]+0x53): undefined reference tobool std::tr1::regex_match<__gnu_cxx::__normal_iterator, std::allocator , std::allocator, std::allocator , char, std::tr1::regex_traits (__gnu_cxx::__normal_iterator, std::allocator , __gnu_cxx::__normal_iterator, std::allocator , std::tr1::match_results<__gnu_cxx::__normal_iterator, std::allocator , std::allocator, std::allocator &, std::tr1::basic_regex const&, std::bitset<11u)' collect2: ld returned 1 exit status I can't really make heads or tails of this, except for the undefined reference to std::tr1::basic_regex near the beginning. Anyone know what's going on?
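
    For what it's worth, the missing symbols (basic_regex::_M_compile and the regex_match instantiation) are what typically shows up when the libstdc++ that ships with older GCC releases declares <tr1/regex> but does not actually implement it. A minimal translation unit, independent of the project's headers, isolates whether the problem is the toolchain or the project; this is only a sketch:

        #include <tr1/regex>
        #include <iostream>

        int main() {
            // If this small file produces the same undefined references, the
            // TR1 regex implementation itself is incomplete on this toolchain.
            std::tr1::regex colorRX("#[a-fA-F0-9]{6}");
            std::cout << std::boolalpha
                      << std::tr1::regex_match("#1A2b3C", colorRX) << std::endl;
            return 0;
        }

    Build it with the same g++ -Wall -std=c++0x flags used above; if it fails to link in the same way, no change to MapObject.cpp will help and switching to another regex library is the usual workaround.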

    Read the article

  • BitBlt ignores CAPTUREBLT and seems to always capture a cached copy of the target...

    - by Jake Petroules
    I am trying to capture screenshots using the BitBlt function. However, every single time I capture a screenshot, the non-client area NEVER changes no matter what I do. It's as if it's getting some cached copy of it. The client area is captured correctly. If I close and then re-open the window, and take a screenshot, the non-client area will be captured as it is. Any subsequent captures after moving/resizing the window have no effect on the captured screenshot. Again, the client area will be correct. Furthermore, the CAPTUREBLT flag seems to do absolutely nothing at all. I notice no change with or without it. Here is my capture code: QPixmap WindowManagerUtils::grabWindow(WId windowId, GrabWindowFlags flags, int x, int y, int w, int h) { RECT r; switch (flags) { case WindowManagerUtils::GrabWindowRect: GetWindowRect(windowId, &r); break; case WindowManagerUtils::GrabClientRect: GetClientRect(windowId, &r); break; case WindowManagerUtils::GrabScreenWindow: GetWindowRect(windowId, &r); return QPixmap::grabWindow(QApplication::desktop()->winId(), r.left, r.top, r.right - r.left, r.bottom - r.top); case WindowManagerUtils::GrabScreenClient: GetClientRect(windowId, &r); return QPixmap::grabWindow(QApplication::desktop()->winId(), r.left, r.top, r.right - r.left, r.bottom - r.top); default: return QPixmap(); } if (w < 0) { w = r.right - r.left; } if (h < 0) { h = r.bottom - r.top; } #ifdef Q_WS_WINCE_WM if (qt_wince_is_pocket_pc()) { QWidget *widget = QWidget::find(winId); if (qobject_cast<QDesktopWidget*>(widget)) { RECT rect = {0,0,0,0}; AdjustWindowRectEx(&rect, WS_BORDER | WS_CAPTION, FALSE, 0); int magicNumber = qt_wince_is_high_dpi() ? 4 : 2; y += rect.top - magicNumber; } } #endif // Before we start creating objects, let's make CERTAIN of the following so we don't have a mess Q_ASSERT(flags == WindowManagerUtils::GrabWindowRect || flags == WindowManagerUtils::GrabClientRect); // Create and setup bitmap HDC display_dc = NULL; if (flags == WindowManagerUtils::GrabWindowRect) { display_dc = GetWindowDC(NULL); } else if (flags == WindowManagerUtils::GrabClientRect) { display_dc = GetDC(NULL); } HDC bitmap_dc = CreateCompatibleDC(display_dc); HBITMAP bitmap = CreateCompatibleBitmap(display_dc, w, h); HGDIOBJ null_bitmap = SelectObject(bitmap_dc, bitmap); // copy data HDC window_dc = NULL; if (flags == WindowManagerUtils::GrabWindowRect) { window_dc = GetWindowDC(windowId); } else if (flags == WindowManagerUtils::GrabClientRect) { window_dc = GetDC(windowId); } DWORD ropFlags = SRCCOPY; #ifndef Q_WS_WINCE ropFlags = ropFlags | CAPTUREBLT; #endif BitBlt(bitmap_dc, 0, 0, w, h, window_dc, x, y, ropFlags); // clean up all but bitmap ReleaseDC(windowId, window_dc); SelectObject(bitmap_dc, null_bitmap); DeleteDC(bitmap_dc); QPixmap pixmap = QPixmap::fromWinHBITMAP(bitmap); DeleteObject(bitmap); ReleaseDC(NULL, display_dc); return pixmap; } Most of this code comes from Qt's QWidget::grabWindow function, as I wanted to make some changes so it'd be more flexible. Qt's documentation states that: The grabWindow() function grabs pixels from the screen, not from the window, i.e. if there is another window partially or entirely over the one you grab, you get pixels from the overlying window, too. However, I experience the exact opposite... regardless of the CAPTUREBLT flag. I've tried everything I can think of... nothing works. Any ideas?

    Read the article

  • Running 32-bit assembly code on a 64-bit Linux & 64-bit processor: explain the anomaly

    - by claws
    Hello, I'm in an interesting problem.I forgot I'm using 64bit machine & OS and wrote a 32 bit assembly code. I don't know how to write 64 bit code. This is the x86 32-bit assembly code for Gnu Assembler (AT&T syntax) on Linux. //hello.S #include <asm/unistd.h> #include <syscall.h> #define STDOUT 1 .data hellostr: .ascii "hello wolrd\n"; helloend: .text .globl _start _start: movl $(SYS_write) , %eax //ssize_t write(int fd, const void *buf, size_t count); movl $(STDOUT) , %ebx movl $hellostr , %ecx movl $(helloend-hellostr) , %edx int $0x80 movl $(SYS_exit), %eax //void _exit(int status); xorl %ebx, %ebx int $0x80 ret Now, This code should run fine on a 32bit processor & 32 bit OS right? As we know 64 bit processors are backward compatible with 32 bit processors. So, that also wouldn't be a problem. The problem arises because of differences in system calls & call mechanism in 64-bit OS & 32-bit OS. I don't know why but they changed the system call numbers between 32-bit linux & 64-bit linux. asm/unistd_32.h defines: #define __NR_write 4 #define __NR_exit 1 asm/unistd_64.h defines: #define __NR_write 1 #define __NR_exit 60 Anyway using Macros instead of direct numbers is paid off. Its ensuring correct system call numbers. when I assemble & link & run the program. $cpp hello.S hello.s //pre-processor $as hello.s -o hello.o //assemble $ld hello.o // linker : converting relocatable to executable Its not printing helloworld. In gdb its showing: Program exited with code 01. I don't know how to debug in gdb. using tutorial I tried to debug it and execute instruction by instruction checking registers at each step. its always showing me "program exited with 01". It would be great if some on could show me how to debug this. (gdb) break _start Note: breakpoint -10 also set at pc 0x4000b0. Breakpoint 8 at 0x4000b0 (gdb) start Function "main" not defined. Make breakpoint pending on future shared library load? (y or [n]) y Temporary breakpoint 9 (main) pending. Starting program: /home/claws/helloworld Program exited with code 01. (gdb) info breakpoints Num Type Disp Enb Address What 8 breakpoint keep y 0x00000000004000b0 <_start> 9 breakpoint del y <PENDING> main I tried running strace. This is its output: execve("./helloworld", ["./helloworld"], [/* 39 vars */]) = 0 write(0, NULL, 12 <unfinished ... exit status 1> Explain the parameters of write(0, NULL, 12) system call in the output of strace? What exactly is happening? I want to know the reason why exactly its exiting with exitstatus=1? Can some one please show me how to debug this program using gdb? Why did they change the system call numbers? Kindly change this program appropriately so that it can run correctly on this machine. EDIT: After reading Paul R's answer. I checked my files claws@claws-desktop:~$ file ./hello.o ./hello.o: ELF 64-bit LSB relocatable, x86-64, version 1 (SYSV), not stripped claws@claws-desktop:~$ file ./hello ./hello: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, not stripped All of my questions still hold true. What exactly is happening in this case? Can someone please answer my questions and provide an x86-64 version of this code?
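
    To illustrate just the system-call-number point: the same two calls can be made from C/C++ through the libc syscall() wrapper, and letting the headers choose SYS_write and SYS_exit is exactly what the #include <asm/unistd.h> macros are meant to do for the assembly version. This is only a sketch of that idea, not a replacement for the assembly exercise:

        #include <sys/syscall.h>   // SYS_write and SYS_exit resolve per architecture
        #include <unistd.h>

        int main() {
            const char msg[] = "hello world\n";
            // On x86-64 SYS_write is 1 and SYS_exit is 60; on i386 they are 4 and 1.
            // The header supplies the right numbers for whatever this is built as.
            syscall(SYS_write, 1, msg, sizeof(msg) - 1);
            syscall(SYS_exit, 0);
            return 0;   // never reached
        }

    For the assembly itself, the usual way to keep it 32-bit on a 64-bit host is to assemble with as --32 and link with ld -m elf_i386; a genuinely 64-bit version would instead use the 64-bit numbers, the syscall instruction, and the rdi/rsi/rdx argument registers rather than int $0x80.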

    Read the article

  • How to redirect terminal contents to a JTextPane?

    - by sonu thomas
    hi.. I was trying to run a java class file using java code.The aim was to direct the executing sequence of the terminal of fedora 10 into a frame with a textpane. My code is: import java.io.DataInputStream; import java.io.IOException; import javax.swing.JDialog; import javax.swing.JFrame; import javax.swing.JOptionPane; import javax.swing.JPanel; import javax.swing.JTextArea; import javax.swing.JTextPane; //import sun.reflect.ReflectionFactory.GetReflectionFactoryAction; public class file { /** * @param args */ /** * @param args * @throws IOException */ public static void main(String[] args) throws IOException{ // TODO Auto-generated method stub JDialog jj = new JDialog(); jj.setTitle("helllo"); String sssub1,sssub2,sssub3; String ss="C:/Users/sonu/Desktop/ourIDE/src/src/nn.java"; JPanel n = new JPanel(); jj.setContentPane(n); JTextPane tpn2=new JTextPane(); jj.getContentPane().add(tpn2); jj.setVisible(true); Runtime runtime; Process process; if(ss.indexOf(" ")==-1) { try { runtime= Runtime.getRuntime(); sssub1="/home/ss/Desktop/src/"; sssub2="nn"; process=runtime.exec("sh jrun.sh "+sssub1+" "+sssub2); DataInputStream data=new DataInputStream(process.getInputStream()); DataInputStream data_data=new DataInputStream(process.getErrorStream()); String s="",t=""; int ch; while((ch=data.read())!=-1){ s=s+(char)ch; } data.close(); while((ch=data_data.read())!=-1){ t=t+(char)ch; } data_data.close(); if(t.equals("")) { s+="\nNormal Termination."; tpn2.setText(s); } else tpn2.setText(t); }catch(Exception e){ System.out.println("Error executing file==>"+e);} } } } The content of **jrun.sh** is cd $1 java $2 When the content of **nn.java** was this: import java.io.*; class nn { public static void main(String[] ar)throws IOException { int i=90; BufferedReader dt = new BufferedReader(new InputStreamReader(System.in)); System.out.println("Enter"); //i=Integer.parseInt(dt.readLine()); line--notable System.out.println("i="+i); } } It worked smoothly But when I remove the comment on line--notable,it gave me Textpane with no content. The problem is : I cant read an input from nn.java Kindly give me a solution... If i am able to: get the terminal pop up with executing the nn.class,and i am able to enter the input,then it will do... Thanks in advance...

    Read the article

  • Tridion New UI Preview Site is not reflecting the changes unless published

    - by Ram G
    I have new UI setup and noticing that when ever I update a page it is not refreshing with the updated changes. I do not see either the page_{sessionId/GUID}.aspx created either. Checked the session preview DB and I see the changes in PAGE_CONTENT table with new rendered content, so seems like session preview is working fine but the Preview site is not able to get the changes and refresh the UI. I have checked all the preview handlers and mappings for .aspx and made sure they are correct in web.config. Any thoughts on why the preview site not showing up the changes? I have the session preview DB setup in cd_storage_conf.xml. <StorageBindings> <Bundle src="preview_dao_bundle.xml"/> </StorageBindings> <Wrappers> <Wrapper Name="SessionWrapper"> <Timeout>120000</Timeout> <Storage Type="persistence" Id="db-session-webservice" dialect="MSSQL" Class="com.tridion.storage.persistence.JPADAOFactory"> <Pool Type="jdbc" Size="5" MonitorInterval="60" IdleTimeout="120" CheckoutTimeout="120" /> <DataSource Class="com.microsoft.sqlserver.jdbc.SQLServerDataSource"> <Property Name="serverName" Value="localhost" /> <Property Name="portNumber" Value="1433" /> <Property Name="databaseName" Value="Tridion_Broker_SessionPreview" /> <Property Name="user" Value="usr" /> <Property Name="password" Value="pwd" /> </DataSource> </Storage> </Wrapper> </Wrappers> web.config (handlers): <add verb="GET" path="*.htm" type="Tridion.ContentDelivery.Preview.Web.StaticFileHandler" /> <add verb="GET" path="*.jpg" type="Tridion.ContentDelivery.Preview.Web.StaticFileHandler" /> <add verb="GET" path="*.png" type="Tridion.ContentDelivery.Preview.Web.StaticFileHandler" /> <add verb="GET" path="*.aspx" type="Tridion.ContentDelivery.Preview.Web.StaticFileHandler" /> <add verb="GET" path="*.html" type="Tridion.ContentDelivery.Preview.Web.StaticFileHandler" /> <add name="Tridion.ContentDelivery.Preview.Web.PreviewContentModule" type="Tridion.ContentDelivery.Preview.Web.PreviewContentModule" /> Log (timestamp and DEBUG prefix removed): ClaimStore - put: uri=taf:session:id, value=tridion_db59279b-7d37-4b2e-ad98-eaaa6af7038e ClaimStore - put: uri=taf:session:id, value=tridion_db59279b-7d37-4b2e-ad98-eaaa6af7038e ClaimStore - put: uri=taf:tracking:id, value=tridion_d1fa1017-a28d-4f48-a790-b74f78c69314 ClaimStore - put: uri=taf:tracking:id, value=tridion_d1fa1017-a28d-4f48-a790-b74f78c69314 SearchClaimProcessor - No match found for referrer string http://uidemo.practice.com/en/Product/musk.aspx SearchClaimProcessor - No match found for referrer string http://uidemo.practice.com/en/Product/musk.aspx ClaimStore - put: uri=taf:claim:ambientdata:footprintcartridge:devicetype, value=Desktop ClaimStore - put: uri=taf:claim:ambientdata:footprintcartridge:devicetype, value=Desktop ClaimStore - put: uri=taf:claim:ambientdata:footprintcartridge:mobiledevice, value=NotMobile ClaimStore - put: uri=taf:claim:ambientdata:footprintcartridge:acceptlanguage, value=en-US ClaimStore - put: uri=taf:claim:ambientdata:footprintcartridge:mobiledevice, value=NotMobile ClaimStore - put: uri=taf:claim:ambientdata:footprintcartridge:acceptlanguage, value=en-US PageHandler - The session wrappers are correctly installed. Any thoughts/pointers on what might be going wrong...? (sorry for the long post)

    Read the article

  • ASP.NET MVC application deployment / security issues

    - by WestDiscGolf
    I'll start with appologies; I wasn't sure if this was best posted here of Server Fault so if its in the wrong place then please move :-) Basic information I have written the first module of a new application at work. This is written using Visual Studio 2010, targetting .net 3.5 (at the moment) and asp.net mvc 2. This has been working fine during development running on the built in Development server from VS but however does not work once deployed to IIS 7/7.5. To deploy the application, I have built it in release mode and created a deployment package by right clicking on the project in the solution explorer (this will be done with an automated build in tfs once upgrade from the beta). This has then been imported into IIS on the server. The application is using windows/domain authentication. Issue #1 I can fire up internet explorer and browse to the application from a client computer as well as on a remote desktop connection. I can execute the code which reads/stores data in Session fine from the IE instance on the remote desktop but if I browse to it from the client pc it seems to lose the session state. I click on the form submit and the page refreshes and doesn't execute the required code. I've tried setting with; InProc, SQLServer and StateServer. but with no luck :-( Issue #2 As part of the application it views PDF and Tiff documents on the fly which are on a network share on the office network and creates thumbnails if the document hasn't been viewed before. This works if running on the machine the application is deployed to; however when browsing from a client pc I get an error saying: Access to the path '\\fileserver\folder\file.tif' is denied Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.UnauthorizedAccessException: Access to the path '\\fileserver\folder\file.TIF' is denied. ASP.NET is not authorized to access the requested resource. Consider granting access rights to the resource to the ASP.NET request identity. ASP.NET has a base process identity (typically {MACHINE}\ASPNET on IIS 5 or Network Service on IIS 6) that is used if the application is not impersonating. If the application is impersonating via , the identity will be the anonymous user (typically IUSR_MACHINENAME) or the authenticated request user. As this is on a different server the user is not accessible. To get round this I have tried: 1 - setting the application pool to run as domain administrator (I know this is a security risk, but I'm just trying to get it to work at the moment!) 2 - to set the log on account for World Wide Web Publishing service to be the domain admin . When trying to restart the service I get ... Windows could not start the World Wide Web Publishing Service service on the Local Computer. Error 1079: The account specified for this service is different from the account specified fro the other services running in the same process. Any pointers/help would be much appriciated as I'm pulling my hair out (of what little I have left). Update I've been using this funky little tool I found - DelegConfig v2 beta (Delegation / Kerberos Configuration Tool). This has been really usefull. So I've got the accessing of the file share working (there is a test page which will read the files) so now I've just got the issue of passing through the users credentials through to the SQL Server (wans't my choice to do it this way!!) 
to execute the queries etc. but I can't get it to log on as the user. It tries to access it as "NT Authority\Network Service" which doesn't have a sql login (as should be the logged on user). My connection string is: <add name="User" connectionString="Data Source=.;Integrated Security=True" providerName="System.Data.SqlClient" /> No initial catalog is specified as the system is over multiple dbs (also wasn't my choice!!). I really appriciate all the help so far! :-) Any further hints?!

    Read the article

  • Add the same QTreeWidgetItems into the second column

    - by srinu
    hello i am using the following program to display the qtreewidget. main.cpp include include "qdomsimple.h" include include "qdomsimple.h" int main(int argc, char *argv[]) { QApplication a(argc, argv); QStringList filelist; filelist.push_back("C:\department1.xml"); filelist.push_back("C:\department2.xml"); filelist.push_back("C:\department3.xml"); QDOMSimple w(filelist); w.resize(260,200); w.show(); return a.exec(); } qdomsimple.cpp include "qdomsimple.h" include include QDOMSimple::QDOMSimple(QStringList strlst,QWidget *parent) : QWidget(parent) { k=0; // DOM document QDomDocument doc("title"); QStringList headerlabels; headerlabels.push_back("Chemistry"); headerlabels.push_back("Mechanical"); headerlabels.push_back("IT"); m_tree = new QTreeWidget( this ); m_tree->setColumnCount(3); m_tree->setHeaderLabels(headerlabels); QStringList::iterator it; for(it=strlst.begin();it<strlst.end();it++) { QFile file(*it); if ( file.open( QIODevice::ReadOnly | QIODevice::Text )) { // The tree view to be filled with xml data // (m_tree is a class member variable) // Creating the DOM tree doc.setContent( &file ); file.close(); // Root of the document QDomElement root = doc.documentElement(); // Taking the first child node of the root QDomNode child = root.firstChild(); // Setting the root as the header of the tree //QTreeWidgetItem* header = new QTreeWidgetItem; //header->setText(k,root.nodeName()); //m_tree->setHeaderItem(header); // Parse until the end of document while (!child.isNull()) { //Convert a DOM node to DOM element QDomElement element = child.toElement(); //Parse only if the node is a really an element if (!element.isNull()) { //Parse the element recursively parseElement( element,0); //Go to the next sibling child = child.nextSiblingElement(); } } //m_tree->setGeometry( QApplication::desktop()->availableGeometry() ); //setGeometry( QApplication::desktop()->availableGeometry() ); } k++; } } void QDOMSimple::parseElement( QDomElement& aElement, QTreeWidgetItem *aParentItem ) { // A map of all attributes of an element QDomNamedNodeMap attrMap = aElement.attributes(); // List all attributes QStringList attrList; for ( int i = 0; i < attrMap.count(); i++ ) { // Attribute name //QString attr = attrMap.item( i ).nodeName(); //attr.append( "-" ); /* Attribute value QString attr; attr.append( attrMap.item( i ).nodeValue() );*/ //attrList.append( attr ); attrList.append(attrMap.item( i).nodeValue()); } QTreeWidgetItem* item; // Create a new view item for elements having child nodes if (aParentItem) { item = new QTreeWidgetItem(aParentItem); } // Create a new view item for elements without child nodes else { item = new QTreeWidgetItem( m_tree ); } //Set tag name and the text QString tagNText; tagNText.append( aElement.tagName() ); //tagNText.append( "------" ); //tagNText.append( aElement.text() ); item->setText(0, tagNText ); // Append attributes to the element node of the tree for ( int i = 0; i < attrList.count(); i++ ) { QTreeWidgetItem* attrItem = new QTreeWidgetItem( item ); attrItem->setText(0, attrList[i] ); } // Repeat the process recursively for child elements QDomElement child = aElement.firstChildElement(); while (!child.isNull()) { parseElement( child, item ); child = child.nextSiblingElement(); } } QDOMSimple::~QDOMSimple() { } for this i got the qtreewidget like this +file1 +file2 +file3 but actual wanted output is +file1 +file2 +file3 i don't know how to do it.Thanks in advance
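
    If the intent of the three header labels (Chemistry, Mechanical, IT) is one column per input file, the relevant knob is the column argument of QTreeWidgetItem::setText(), which the code above always passes as 0. Below is a minimal standalone sketch of items written into different columns (the data is made up; it is not the XML-parsing code itself):

        #include <QApplication>
        #include <QStringList>
        #include <QTreeWidget>
        #include <QTreeWidgetItem>

        int main(int argc, char *argv[]) {
            QApplication app(argc, argv);

            QTreeWidget tree;
            tree.setColumnCount(3);
            tree.setHeaderLabels(QStringList() << "Chemistry" << "Mechanical" << "IT");

            // One top-level item per file, each writing its text into its own column.
            for (int k = 0; k < 3; ++k) {
                QTreeWidgetItem *fileItem = new QTreeWidgetItem(&tree);
                fileItem->setText(k, QString("file%1").arg(k + 1));

                QTreeWidgetItem *child = new QTreeWidgetItem(fileItem);
                child->setText(k, "element");   // children keep the same column index
            }

            tree.resize(260, 200);
            tree.show();
            return app.exec();
        }

    Applied to the parser, that would mean passing the per-file index k down into parseElement() and using it in every setText() call instead of the literal 0.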

    Read the article

  • Android: HttpURLConnection not working properly

    - by giorgiline
    I'm trying to get the cookies from a website after sending user credentials through a POST Request an it seems that it doesn't work in android this way. ¿Am I doing something bad?. Please help. I've searched here in different posts but there's no useful answer. It's curious that this run in a desktop Java implementation it works perfect but it crashes in Android platform. And it is exactly the same code, specifically when calling HttpURLConnection.getHeaderFields(), it also happens with other member methods. It's a simple code and I don't know why the hell isn't working. DESKTOP CODE: This goes just in the main() HttpURLConnection connection = null; OutputStream out = null; try { URL url = new URL("http://www.XXXXXXXX.php"); String charset = "UTF-8"; String postback = "1"; String user = "XXXXXXXXX"; String password = "XXXXXXXX"; String rememberme = "on"; String query = String.format("postback=%s&user=%s&password=%s&rememberme=%s" , URLEncoder.encode(postback, charset) , URLEncoder.encode(user,charset) , URLEncoder.encode(password, charset) , URLEncoder.encode(rememberme, charset)); connection = (HttpURLConnection)url.openConnection(); connection.setRequestMethod("POST"); connection.setRequestProperty("Accept-Charset", charset); connection.setDoOutput(true); connection.setFixedLengthStreamingMode(query.length()); out = connection.getOutputStream (); out.write(query.getBytes(charset)); if (connection.getHeaderFields() == null){ System.out.println("Header null"); }else{ for (String cookie: connection.getHeaderFields().get("Set-Cookie")){ System.out.println(cookie.split(";", 2)[0]); } } } catch (IOException e){ e.printStackTrace(); } finally { try { out.close();} catch (IOException e) { e.printStackTrace();} connection.disconnect(); } So the output is: login_key=20ad8177db4eca3f057c14a64bafc2c9 FASID=cabf20cc471fcacacdc7dc7e83768880 track=30c8183e4ebbe8b3a57b583166326c77 client-data=%7B%22ism%22%3Afalse%2C%22showm%22%3Afalse%2C%22ts%22%3A1349189669%7D ANDROID CODE: This goes inside doInBackground AsyncTask body HttpURLConnection connection = null; OutputStream out = null; try { URL url = new URL("http://www.XXXXXXXXXXXXXX.php"); String charset = "UTF-8"; String postback = "1"; String user = "XXXXXXXXX"; String password = "XXXXXXXX"; String rememberme = "on"; String query = String.format("postback=%s&user=%s&password=%s&rememberme=%s" , URLEncoder.encode(postback, charset) , URLEncoder.encode(user,charset) , URLEncoder.encode(password, charset) , URLEncoder.encode(rememberme, charset)); connection = (HttpURLConnection)url.openConnection(); connection.setRequestMethod("POST"); connection.setRequestProperty("Accept-Charset", charset); connection.setDoOutput(true); connection.setFixedLengthStreamingMode(query.length()); out = connection.getOutputStream (); out.write(query.getBytes(charset)); if (connection.getHeaderFields() == null){ Log.v(TAG, "Header null"); }else{ for (String cookie: connection.getHeaderFields().get("Set-Cookie")){ Log.v(TAG, cookie.split(";", 2)[0]); } } } catch (IOException e){ e.printStackTrace(); } finally { try { out.close();} catch (IOException e) { e.printStackTrace();} connection.disconnect(); } And here there is no output, it seems that connection.getHeaderFields() doesn't return result. It takes al least 30 seconds to show the Log: 10-02 16:56:25.918: V/class com.giorgi.myproject.activities.HomeActivity(2596): Header null

    Read the article

  • How can I get SQL Server transactions to use record-level locks?

    - by Joe White
    We have an application that was originally written as a desktop app, lo these many years ago. It starts a transaction whenever you open an edit screen, and commits if you click OK, or rolls back if you click Cancel. This worked okay for a desktop app, but now we're trying to move to ADO.NET and SQL Server, and the long-running transactions are problematic. I found that we'll have a problem when multiple users are all trying to edit (different subsets of) the same table at the same time. In our old database, each user's transaction would acquire record-level locks to every record they modified during their transaction; since different users were editing different records, everyone gets their own locks and everything works. But in SQL Server, as soon as one user edits a record inside a transaction, SQL Server appears to get a lock on the entire table. When a second user tries to edit a different record in the same table, the second user's app simply locks up, because the SqlConnection blocks until the first user either commits or rolls back. I'm aware that long-running transactions are bad, and I know that the best solution would be to change these screens so that they no longer keep transactions open for a long time. But since that would mean some invasive and risky changes, I also want to research whether there's a way to get this code up and running as-is, just so I know what my options are. How can I get two different users' transactions in SQL Server to lock individual records instead of the entire table? Here's a quick-and-dirty console app that illustrates the issue. I've created a database called "test1", with one table called "Values" that just has ID (int) and Value (nvarchar) columns. If you run the app, it asks for an ID to modify, starts a transaction, modifies that record, and then leaves the transaction open until you press ENTER. I want to be able to start the program and tell it to update ID 1; let it get its transaction and modify the record; start a second copy of the program and tell it to update ID 2; have it able to update (and commit) while the first app's transaction is still open. Currently it freezes at step 4, until I go back to the first copy of the app and close it or press ENTER so it commits. The call to command.ExecuteNonQuery blocks until the first connection is closed. public static void Main() { Console.Write("ID to update: "); var id = int.Parse(Console.ReadLine()); Console.WriteLine("Starting transaction"); using (var scope = new TransactionScope()) using (var connection = new SqlConnection(@"Data Source=localhost\sqlexpress;Initial Catalog=test1;Integrated Security=True")) { connection.Open(); var command = connection.CreateCommand(); command.CommandText = "UPDATE [Values] SET Value = 'Value' WHERE ID = " + id; Console.WriteLine("Updating record"); command.ExecuteNonQuery(); Console.Write("Press ENTER to end transaction: "); Console.ReadLine(); scope.Complete(); } } Here are some things I've already tried, with no change in behavior: Changing the transaction isolation level to "read uncommitted" Specifying a "WITH (ROWLOCK)" on the UPDATE statement

    Read the article

  • Difficulty analyzing text from a file

    - by Nikko
    I'm running into a rather amusing error with my output on this lab and I was wondering if any of you might be able to hint at where my problem lies. The goal is find the high, low, average, sum of the record, and output original record. I started with a rather basic program to solve for one record and when I achieved this I expanded the program to work with the entire text file. Initially the program would correctly output: 346 130 982 90 656 117 595 High# Low# Sum# Average# When I expanded it to work for the entire record my output stopped working how I had wanted it to. 0 0 0 0 0 0 0 High: 0 Low: 0 Sum: 0 Average: 0 0 0 0 0 0 0 0 High: 0 Low: 0 Sum: 0 Average: 0 etc... I cant quite figure out why my ifstream just completely stopped bothering to input the values from file. I'll go take a walk and take another crack at it. If that doesn't work I'll be back here to check for any responses =) Thank you! #include <iostream> #include <fstream> #include <iomanip> #include <string> using namespace std; int main() { int num; int high = 0; int low = 1000; double average = 0; double sum = 0; int numcount = 0; int lines = 1; char endoline; ifstream inData; ofstream outData; inData.open("c:\\Users\\Nikko\\Desktop\\record5ain.txt"); outData.open("c:\\Users\\Nikko\\Desktop\\record5aout.txt"); if(!inData) //Reminds me to change path names when working on different computers. { cout << "Could not open file, program will exit" << endl; exit(1); } while(inData.get(endoline)) { if(endoline == '\n') lines++; } for(int A = 0; A < lines; A++) { for(int B = 0; B < 7; B++) { while(inData >> num) inData >> num; numcount++; sum += num; if(num < low) low = num; if(num > high) high = num; average = sum / numcount; outData << num << '\t'; } outData << "High: " << high << " " << "Low: " << low << " " << "Sum: " << sum << " " << "Average: " << average << endl; } inData.close(); outData.close(); return(0); }
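
    One likely culprit: the line-counting loop at the top reads the stream all the way to end-of-file, so the later inData >> num extractions have nothing left to read (and the inner while(inData >> num) would swallow every remaining value rather than seven per record anyway). A common structure for this kind of record processing is one getline() per record plus an istringstream for its fields; the following is only a sketch with shortened file names, not the full lab solution:

        #include <fstream>
        #include <iostream>
        #include <sstream>
        #include <string>

        int main() {
            std::ifstream inData("record5ain.txt");    // paths shortened; adjust as needed
            std::ofstream outData("record5aout.txt");
            std::string line;

            // Single pass: read one whole record (line) at a time, then pull
            // the numbers out of that line only.
            while (std::getline(inData, line)) {
                std::istringstream fields(line);
                int num, count = 0, high = 0, low = 1000;
                double sum = 0;
                while (fields >> num) {
                    ++count;
                    sum += num;
                    if (num > high) high = num;
                    if (num < low)  low = num;
                    outData << num << '\t';
                }
                if (count > 0)
                    outData << "High: " << high << "  Low: " << low
                            << "  Sum: " << sum << "  Average: " << sum / count << std::endl;
            }
            return 0;
        }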

    Read the article

  • C# Object Problem - Can't Solve It

    - by user612041
    I'm getting the error 'Object reference not set to an instance of an object'. I've tried looking at similar problems but genuinely cannot see what the problem is with my program. The line of code that I am having an error with is: labelQuestion.Text = table.Rows[0]["Question"].ToString(); Here is my code in its entirety: using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; using System.Data.OleDb; using System.Data.Sql; using System.Data.SqlClient; namespace Quiz_Test { public partial class Form1 : Form { public Form1() { InitializeComponent(); } String chosenAnswer, correctAnswer; DataTable table; private void Form1_Load(object sender, EventArgs e) { //declare connection string using windows security string cnString = "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\\Users\\Hannah\\Desktop\\QuizQuestions.accdb"; //declare Connection, command and other related objects OleDbConnection conGet = new OleDbConnection(cnString); OleDbCommand cmdGet = new OleDbCommand(); //try //{ //open connection conGet.Open(); //String correctAnswer; cmdGet.CommandType = CommandType.Text; cmdGet.Connection = conGet; cmdGet.CommandText = "SELECT * FROM QuizQuestions ORDER BY rnd()"; OleDbDataReader reader = cmdGet.ExecuteReader(); reader.Read(); labelQuestion.Text = table.Rows[0]["Question"].ToString(); radioButton1.Text = table.Rows[0]["Answer 1"].ToString(); radioButton2.Text = table.Rows[0]["Answer 2"].ToString(); radioButton3.Text = table.Rows[0]["Answer 3"].ToString(); radioButton4.Text = table.Rows[0]["Answer 4"].ToString(); correctAnswer = table.Rows[0]["Correct Answer"].ToString(); ; conGet.Close(); } private void btnSelect_Click(object sender, EventArgs e) { String cnString = "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\\Users\\Hannah\\Desktop\\QuizQuestions.accdb"; //declare Connection, command and other related objects OleDbConnection conGet = new OleDbConnection(cnString); OleDbCommand cmdGet = new OleDbCommand(); //try { //open connection conGet.Open(); cmdGet.CommandType = CommandType.Text; cmdGet.Connection = conGet; cmdGet.CommandText = "SELECT * FROM QuizQuestions ORDER BY rnd()"; // select all columns in all rows OleDbDataReader reader = cmdGet.ExecuteReader(); reader.Read(); if (radioButton1.Checked) { chosenAnswer = reader["Answer 1"].ToString(); } else if (radioButton2.Checked) { chosenAnswer = reader["Answer 2"].ToString(); } else if (radioButton3.Checked) { chosenAnswer = reader["Answer 3"].ToString(); } else { chosenAnswer = reader["Answer 4"].ToString(); } if (chosenAnswer == reader["Correct Answer"].ToString()) { //chosenCorrectly++; MessageBox.Show("You have got this answer correct"); //label2.Text = "You have got " + chosenCorrectly + " answers correct"; } else { MessageBox.Show("That is not the correct answer"); } } } } } I realise the problem isn't too big but I can't see how my declaration timings are wrong

    Read the article

  • Calling functions of a header file

    - by navinbecse
    i want to know the way i am calling the methods defined in windows.h from c++ program is correct or wrong.. when i tried to get the library files out of it, it is showing the errors saying that unresolved methods... i am using this for jni purpose... #define _WIN32_WINNT 0x0500 #include "keylogs.h" #include <fstream #include <windows.h using namespace std; ofstream out("C:\Users\402100\Desktop\keyloggerfile.txt", ios::out); LRESULT CALLBACK keyboardHookProc(int nCode, WPARAM wParam, LPARAM lParam) { PKBDLLHOOKSTRUCT p = (PKBDLLHOOKSTRUCT) (lParam); // If key is being pressed if (wParam == WM_KEYDOWN) { switch (p-vkCode) { // Invisible keys case VK_CAPITAL: out << ""; break; case VK_SHIFT: out << ""; break; case VK_LCONTROL: out << ""; break; case VK_RCONTROL: out << ""; break; case VK_INSERT: out << ""; break; case VK_END: out << ""; break; case VK_PRINT: out << ""; break; case VK_DELETE: out << ""; break; case VK_BACK: out << ""; break; case VK_SPACE: out << ""; break; case VK_RMENU: out << ""; break; case VK_LMENU: out << ""; break; case VK_LEFT: out << ""; break; case VK_RIGHT: out << ""; break; case VK_UP: out << ""; break; case VK_DOWN: out << ""; break; // Visible keys default: out << char(tolower(p->vkCode)); } } return CallNextHookEx(NULL, nCode, wParam, lParam); } JNIEXPORT void JNICALL Java_keylog_Main_CallerMethod(JNIEnv *env, jobject) { HINSTANCE hInstance=GetModuleHandle(0); HHOOK keyboardHook = SetWindowsHookEx( WH_KEYBOARD_LL, keyboardHookProc, hInstance, 0); MessageBox(NULL, "Press OK to stop logging.", "Information", MB_OK); out.close(); } int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nShowCmd) { // Set windows hook return 0; } The errors which i am getting is... file.obj : error LNK2019: unresolved external symbol _imp_CallNextHookEx@16 re ferenced in function "long __stdcall keyboardHookProc(int,unsigned int,long)" (? keyboardHookProc@@YGJHIJ@Z) file.obj : error LNK2019: unresolved external symbol _imp_MessageBoxA@16 refer enced in function "class std::basic_string,cl ass std::allocator __cdecl CallerMethod(void)" (?CallerMethod@@YA?AV?$ba sic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@XZ) file.obj : error LNK2019: unresolved external symbol _imp_SetWindowsHookExA@16 referenced in function "class std::basic_string,class std::allocator __cdecl CallerMethod(void)" (?CallerMethod@@YA? AV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@XZ) c:\Users\402100\Desktop\oldfile.dll : fatal error LNK1120: 3 unresolved externals can any one help me please....
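
    All three unresolved imports (_imp_SetWindowsHookExA@16, _imp_CallNextHookEx@16, _imp_MessageBoxA@16) are functions exported by user32.dll, so the DLL project is almost certainly just not being linked against the user32 import library. A minimal check, shown as a sketch; with MSVC the pragma pulls the library in, while other toolchains need user32.lib (or -luser32) added to the link line of the DLL project:

        #include <windows.h>

        // SetWindowsHookEx, CallNextHookEx and MessageBoxA all live in user32.dll,
        // which is what the _imp_...@16 names in the error list refer to.
        #pragma comment(lib, "user32.lib")

        int main() {
            MessageBoxA(NULL, "user32 resolved at link time", "Information", MB_OK);
            return 0;
        }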

    Read the article

  • Adobe Photoshop vs Lightroom vs Aperture

    - by Aditi
    Adobe Photoshop is the standard choice for photographers, graphic artists and Web designers. Adobe Photoshop Lightroom  & Apple’s Aperture are also in the same league but the usage is vastly different. Although Photoshop is most popular & widely used by photographers, but in many ways it’s less relevant to photographers than ever before. As Lightroom & Aperture is aimed squarely at photographers for photo-processing. With this write up we are going to help you choose what is right for you and why. Adobe Photoshop Adobe Photoshop is the most liked tool for the detailed photo editing & designing work. Photoshop provides great features for rollover and Image slicing. Adobe Photoshop includes comprehensive optimization features for producing the highest quality Web graphics with the smallest possible file sizes. You can also create startling animations with it. Designers & Editors know how important precise masking is, PhotoShop lets you do that with various detailing tools. Art history brush, contact sheets, and history palette are some of the smart features, which add to its viability. Download Whether you’re producing printed pages or moving images, you can work more efficiently and produce better results because of its smooth integration across other adobe applications. Buy supporting layer effects, it allows you to quickly add drop shadows, inner and outer glows, bevels, and embossing to layers. It also provides Seamless Web Graphics Workflow. Photoshop is hands-down the BEST for editing. Photoshop Cons: • Slower, less precise editing features in Bridge • Processing lots of images requires actions and can be slower than exporting images from Lightroom • Much slower with editing and processing a large number of images Aperture Apple Aperture is aimed at the professional photographer who shoots predominantly raw files. It helps them to manage their workflow and perform their initial Raw conversion in a better way. Aperture provides adjustment tools such as Histogram to modify color and white balance, but most of the editing of photos is left for Photoshop. It gives users the option of seeing their photographs laid out like slides or negatives on a light table. It boasts of – stars, color-coding and easy techniques for filtering and picking images. Aperture has moved forward few steps than Photoshop, but most of the editing work has been left for Photoshop as it features seamless Photoshop integration. Aperture Pros: Aperture is a step up from the iPhoto software that comes with every Mac, and fairly easy to learn. Adjustments are made in a logical order from top to bottom of the menu. You can store the images in a library or any folder you choose. Aperture also works really well with direct Canon files. It is just $79 if you buy it through Apple’s App Store Moving forward, it will run on the iPad, and possibly the iPhone – Adobe products like Lightroom and Photoshop may never offer these options It is much nicer and simpler user interface. Lightroom Lightroom does a smashing job of basic fixing and editing. It is more advanced tool for photographers. They can use it to have a startling photography effect. Light room has many advanced features, which makes it one of the best tools for photographers and far ahead of the other two. They are Nondestructive editing. Nothing is actually changed in an image until the photo is exported. Better controls over organizing your photos. Lightroom helps to gather a group of photos to use in a slideshow. 
Lightroom has larger Compare and Survey views of images. Its interface is quickly customizable, and simple keystrokes let you perform different tasks. All Lightroom controls are kept available in panels right next to the photos. The History palette is always available and isn't cleared when you close Lightroom. You gain more colors to work with compared to Photoshop, and with more precise control. Local control, or adjusting small parts of a photo without affecting anything else, has long been an important part of photography. In Lightroom 2, you can darken, lighten, affect color, and change sharpness and other aspects of specific areas in the photo simply by brushing your cursor across those areas. Photoshop has far more power in its Cloning and Healing Brush tools than Lightroom, but Lightroom offers simple cloning and healing that's nondestructive. Lightroom supports the RAW formats of more cameras than Aperture. Lightroom provides the option of storing images outside the application in the file system. It costs less than Photoshop. Why is Photoshop still more advanced than Lightroom? There are countless image-processing plug-ins on the market for doing specialized processing in Photoshop. For example, if your image needs sophisticated noise reduction, you can use the Noiseware plug-in with Photoshop to do a much better job of noise removal than Lightroom can. Lightroom's advantages over Aperture 3: It will always have better integration with Photoshop. Lightroom is backed by a bigger and more active user community (so tutorials and the like are abundant). It has a better noise-reduction tool. The lens-distortion correction tool is especially useful for photographers. Lightroom Cons: • Have to import images to work on them • Slows down with over 10,000 images in the catalog • For processing just one or two images this is a slower workflow Photoshop Pros: • ACR has the same RAW processing controls as Lightroom • The ACR histogram is specialized to the chosen color space (Lightroom is locked into the ProPhoto RGB color space with an sRGB tone curve) • Don't have to import images to open them in Bridge or ACR • Ability to customize processing of RAW images with Photoshop Actions Pricing and Availability The latest version of Photoshop can be purchased from the Adobe store or an Adobe authorized reseller for US$999. The latest version of Aperture can be bought for US$199 from the Apple Online Store or the Mac App Store. You can buy the latest version of Lightroom from the Adobe Store or an Adobe authorized reseller for US$299. Related posts: Adobe Photoshop CS5 vs Photoshop CS5 Extended; Web-based Alternatives to Photoshop; 10 Free Alternatives for Adobe Photoshop Software

    Read the article

  • Feedback on "market manipulation", a peripheral game mechanic for a satirical MMO

    - by BerndBrot
This question asks for feedback on a specific game mechanic. Since there is no single right piece of feedback on a game mechanic, I tried to provide enough context and guidelines to still make it possible for users to rate answers and to accept an answer as the best answer (following these criteria from Writer.SE's meta website). Please comment if you have any suggestions on how I could improve the question in that regard. So, let's begin with the game itself and some of its elements which are relevant for this question. Context I'm working on a satirical, text-based multiplayer adventure and role-playing game set in modern-day London. The game revolves around the concept of sin and features a myriad of (venomous) allusions to all the things that go wrong in this world. Players can choose between character classes like bullshit artist (consultant), bankster, lawyer, mobster, celebrity, politician, etc. In order to complete the game, the player has to live so sinfully with regard to any of the seven deadly sins that a demon is willing to offer them a contract of sponsorship. On their quest to live a sinful life, characters explore more and more locations of modern-day London (on a GoogleMap), fight "monsters" like insurance sales agents or Jehovah's Witnesses, and complete quests, like building a PowerPoint presentation out of marketing buzzwords or keeping up a number of substance-abuse effects in order to progress on the gluttony path. Battles are turn-based, with both combatants having a deck of cards with which they try to make their enemy give in to temptations of all sorts. Tempted enemies sometimes become contacts (an item-drop mechanic), which can be exploited for various benefits, depending on their area of influence (finance, underworld, bureaucracy, etc.), level of influence, and the kind of sway that the player has over them (bribed, seduced, threatened, etc.). Once a contact has been exploited, the player loses that contact. Most actions require turns. Turns are limited, but refill each day. Criteria A number of peripheral game mechanics are supposed to: represent real-world abuses and mischief in a humorous way; integrate real-world data and events to strengthen the feeling of relevance of the game's humor with regard to real-world problems; add fun ways of interacting with other players; and add ways for players to express themselves through game-play. Market manipulation is one such peripheral game mechanic and should fulfill all of these goals. Market manipulation This is my initial design of the mechanic: Players can enter the London Stock Exchange (LSE) (without paying a turn). The LSE displays the stock prices of a number of companies in industries like weapons or tobacco, as well as some derivatives based on wheat and corn. The stock prices are calculated based on the actual stock prices of these companies and derivatives (in real time), any market manipulations that were conducted by the players, and any market corrections made by the system. Players can buy and sell shares with cash, a resource in the game, at the current in-game market value (without paying a turn). Players can manipulate the market, i.e. make the price of a share either rise or fall, by some amount, over a certain period of time. Manipulating the market requires one turn and a contact in the financial sector (see above). The higher the level of influence of the contact, the stronger the effect of the manipulation on the stock price, and/or the shorter it takes for the manipulation to manifest itself. 
Market manipulation also adds a crime to the player's record. (There are a multitude of ways to take care of that, but it is still another "cost" of market manipulations.) The system continuously corrects market manipulations by letting the in-game prices converge towards their real world counterparts at a rate of 2% of the difference between the two per hour. Because of this market correction mechanism, pushing up prices (and screwing down prices) becomes increasingly difficult the higher (lower) the price already is. Whenever food prices reach a certain level, in-game stories are posted about hunger catastrophes happening somewhere far, far away (maybe with links to real world news stories). Whenever a player sells a certain number of shares with a sufficiently high margin, they are mentioned in that day's in-game financial news. Since the number of stock options is very limited, players will inevitably collide in their efforts to manipulate the market in their favor. Hopefully, it will also be a fun side-arena for guilds and covenants to fight each other. Question(s) What do you think of this mechanism given the criteria for peripheral game mechanics that I specified for my game? Do you have any ideas how the mechanic could be improved with regard to these criteria (or otherwise)? Could it be improved to allow for more expressive game-play, or involve an allusion to some other real world madness (like short selling, leveraging, or some other banking magic)? Are there any game-theoretic problems with this mechanic, like maybe certain dominant individual strategies that, collectively, lead to every player profiting and thus eliminating the idea of market manipulation PVP? Also, if you like (or dislike) this question, feel free to participate in the discussion on GDSE meta: "Should we be more lax with regard to SE's question/answer format to make game design questions possible?"
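To make the correction rule concrete, here is a minimal Python sketch of the convergence step described above (prices drifting back toward their real-world values at 2% of the gap per hour). The class and method names are illustrative assumptions, not part of the game's actual code.

```python
# Minimal sketch of the market-correction rule described above.
# Names (Stock, apply_hourly_correction) are illustrative assumptions,
# not taken from the game's actual code base.

class Stock:
    def __init__(self, symbol, in_game_price, real_world_price):
        self.symbol = symbol
        self.in_game_price = in_game_price        # price after player manipulations
        self.real_world_price = real_world_price  # fetched from a live data feed

    def apply_hourly_correction(self, rate=0.02):
        """Move the in-game price 2% of the way back toward the real price."""
        gap = self.real_world_price - self.in_game_price
        self.in_game_price += rate * gap
        return self.in_game_price


# Example: a manipulated share drifting back toward its real quote.
share = Stock("ACME-WHEAT", in_game_price=140.0, real_world_price=100.0)
for hour in range(3):
    print(f"hour {hour + 1}: {share.apply_hourly_correction():.2f}")
# Each correction shrinks the remaining gap, so pushing a price further
# from its real value gets progressively harder, as the design intends.
```

One consequence of this exponential-style decay is that manipulations never overshoot the real-world price, which may help keep the PVP side of the mechanic from spiraling.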

    Read the article

  • Ubuntu 11 and 12 initially fast but later bogs down, CPU pegged

    - by uos??
I started with Ubuntu 11 a few weeks ago. It's on a DELL M4300 with an OCZ SSD. Default setup, except that I've installed the proprietary NVIDIA graphics and BROADCOM wireless drivers. Dual boot with Windows. If I cold boot into Ubuntu, it is very fast, just like the Windows experience that I'm used to. But SOMETHING happens (I haven't yet determined what) and the system gets incredibly slow and stays that way. At first I thought it had to do with Adobe Flash because it seemed to be triggered by sites with Flash. But then I removed Flash and the problem remained. I thought it was just an overheating problem, but I've now upgraded to 12.04, which supposedly fixes the overheating problems I've read about. Perhaps the heat situation was brought on by Flash in my early cases? So I installed Jupiter for CPU management, but the thermometer reports a familiar Windows-side temperature of 53 degrees Celsius. Switching Jupiter to lower performance doesn't help. When I check the System Monitor application, sorting by CPU usage, there are no obvious problem processes. However, in the graphs tab, both CPU cores are pegged at 100%! I notice that the slowness seems similar to the extremely bad performance I got prior to installing the NVIDIA drivers. I'm not sure if that helps. This is the strangest part to me: although the temperature seems OK, even after rebooting, the system remains slow, starting with GRUB2, which is very noticeably delayed, all the way through to either Ubuntu or Windows! That's right, even the Windows side suffers and takes several minutes to complete booting, whereas normally (with my SSD) it's ready to use in 15 seconds. The only way to fix it is to shut down and let the parts cool down. Or maybe it just needs a complete power-off and boot rather than a soft reboot, and temperature has nothing to do with it; is that possible? But know that I have never had this problem in Windows: even when Windows gets very hot (135 F), a reboot is enough time for it to recover. For this reason, I don't think it's a heat thing, but I can't imagine what else could be surviving the reboot. I'm entirely updated; there are no pending updates. I have the post-release updates of NVIDIA too, btw. If this sounds CLOSE to something you know about, but one of the details doesn't line up exactly, it might be a mistake in my perception. Are there tests you can suggest to rule something out? Thanks! 
processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 23 model name : Intel(R) Core(TM)2 Duo CPU T9500 @ 2.60GHz stepping : 6 microcode : 0x60c cpu MHz : 800.000 cache size : 6144 KB physical id : 0 siblings : 2 core id : 0 cpu cores : 2 apicid : 0 initial apicid : 0 fpu : yes fpu_exception : yes cpuid level : 10 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 lahf_lm ida dts tpr_shadow vnmi flexpriority bogomips : 5187.00 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 48 bits virtual power management: processor : 1 vendor_id : GenuineIntel cpu family : 6 model : 23 model name : Intel(R) Core(TM)2 Duo CPU T9500 @ 2.60GHz stepping : 6 microcode : 0x60c cpu MHz : 800.000 cache size : 6144 KB physical id : 0 siblings : 2 core id : 1 cpu cores : 2 apicid : 1 initial apicid : 1 fpu : yes fpu_exception : yes cpuid level : 10 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 lahf_lm ida dts tpr_shadow vnmi flexpriority bogomips : 5186.94 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 48 bits virtual power management: (Redundant figures removed. You can view them in the edits if they are still relevant) ps: %CPU PID USER COMMAND 9.4 2399 jason gnome-terminal 6.2 2408 jason bash 17.3 1117 root /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch -background none 13.7 1667 jason compiz 1.3 1960 jason /usr/lib/unity/unity-panel-service 1.3 1697 jason python /usr/bin/jupiter 0.9 1964 jason /usr/lib/indicator-appmenu/hud-service 0.6 1689 jason nautilus -n 0.4 1458 jason //bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session I should highlight specifically that GRUB2 can also be very slow. I don't know the relationship of which scenarios GRUB2 is also slow, but WHEN it is slow, it is slow both before the menu appears and after the selection is made - although for the diagnosis of GRUB2 it is harder for me to tell what the normal speeds should be. With SSD, I would expect that GRUB2 could load instantly, and that the GRUB2 purple would disappear instantly after the selection. The only delay to be expected is the change in graphics modes (though I couldn't guess why that ever requires any noticeable time)
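Since the /proc/cpuinfo dump above shows both cores sitting at 800 MHz, one way to narrow this down is to watch frequency scaling and thermal readings while the slowdown is happening. The sketch below is only a rough diagnostic idea: it reads standard Linux sysfs files, whose exact paths can vary by kernel and hardware, and it makes no claim about what the root cause is.

```python
# Rough diagnostic sketch: poll CPU frequency and thermal-zone readings
# to see whether the cores stay stuck at 800 MHz or a sensor runs hot
# during the slowdown. Only standard sysfs files are read; paths may
# differ on other kernels/hardware.
import glob
import time

def read_int(path):
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (OSError, ValueError):
        return None

for _ in range(5):
    freqs = [read_int(p) for p in sorted(
        glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq"))]
    temps = [read_int(p) for p in sorted(
        glob.glob("/sys/class/thermal/thermal_zone*/temp"))]
    # scaling_cur_freq is in kHz, thermal readings are in millidegrees C
    print("MHz:", [f // 1000 for f in freqs if f],
          " temp C:", [t / 1000 for t in temps if t])
    time.sleep(2)
```

If the reported frequency stays pinned at the minimum even under load, that would point at throttling or a stuck governor rather than a runaway process.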

    Read the article

  • SQL SERVER – Auditing and Profiling Database Made Easy with SQL Audit and Comply

    - by Pinal Dave
Do you like auditing your database, or can you think of about a million other things you'd rather do?  Unfortunately, auditing is incredibly important.  As with tax audits, auditing databases ensures they are following all the rules, and audits are also valuable for troubleshooting and security. There are several ways to audit SQL Server.  There is manual auditing, which means going through your database "by hand"; it obviously takes a long time and is quite inefficient.  SQL Server also provides programs to help you audit your systems.  Different administrators will have different opinions about best practices and which tools to use, and each one is best suited to certain systems and certain users. Today, though, I would like to talk about Apex SQL Audit.  It is an auditing tool that acts like "track changes" in a word processing document.  It will log what has changed in the database, who made the changes, and what effects these changes have had (i.e. what objects were affected down the line).  All this information is logged, and can be easily viewed or printed for easy access. One of the best features of Apex is that it is so customizable (and easy to use!).  First, start Apex.  Then you can connect to the database you would like to monitor. Once you select your database, you can select which table you want to audit. You can customize right down to the field you'd like to audit, and then select which types of actions you'd like tracked – insert, delete, or update.  Repeat these steps for every database you want monitored. To create the logs, choose "Create triggers" in the menu.  The script written here will be what logs each insert, delete, and update operation.  Press F5 to execute.  All this tracking information will be stored in the AUDIT_LOG_DATA and AUDIT_LOG_TRANSACTIONS tables.  View these tables using ApexSQL Audit reports. These transaction logs can be extremely detailed – especially on very busy servers, where every move is traced.  Reading them can be overwhelming, to say the least.  Apex has tried to make things easier for the average DBA, though. You can read these tracking logs in Apex, and it will display data and objects that affect your server – even things that were happening on your server before you installed Apex! To read these logs, open Apex and connect to the database you want to audit. Go to the Transaction Logs tab, and add the logs you want to read. To narrow down what results you want to see, you can use the Filter tab to choose time, operation type, name, users, and more. Click Open, and you can see the results in a grid (as shown below).  You can export these results to CSV, HTML, XML or SQL files and save them to the hard disk. One of the advantages is that since there are no triggers here, there are no other processes that will affect SQL Server performance.  This method is also how you view history from your database that occurred before Apex was installed.  This type of tracking does require storage space for the data sources, as the database must be fully running, and the transaction logs must exist (things not stored in the transaction logs will not be recoverable). Apex can also replace SQL Server Profiler and SQL Server Traces – which are much more complex and error-prone – with its ApexSQL Comply.  It can do fault-tolerant auditing, centralized reporting, and "who saw what" information in an easy-to-use interface.  
The tracking settings can be altered by the user, or the default options will provide solutions to the most common auditing problems. To get started, open ApexSQL Comply and select Database Filter Settings to choose which database you'd like to audit.  You can select which types of tracking you'd like in Operation Types – DML, DDL, queries executed, execute statements, and more.  Then click Start Auditing. After this, every action will be stored in the central repository database (ApexSQLCrd).  You can view the audit and create a report (or view the standard default report) using a wizard. You can see how easy it is to use ApexSQL Comply.  You can easily set audits, including the type and time, and create customized reports.  Remote users can easily access the reports through the user interface (available online, as well), and security concerns are all taken care of by the program.  Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology
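As a language-neutral illustration of what such an audit trail boils down to, the sketch below models one record per insert/update/delete with the who, what, and when, then filters and exports it the way the report wizard does. The field names are illustrative assumptions, not ApexSQL's actual schema, which the tool manages for you.

```python
# Conceptual sketch of a change-audit trail: one record per INSERT/UPDATE/DELETE
# capturing who, what, and when. Field names are illustrative only and are not
# ApexSQL Audit's real table layout.
import csv
from datetime import datetime

audit_log = [
    {"time": datetime(2012, 6, 1, 9, 15), "user": "sa",     "table": "Orders",
     "operation": "UPDATE", "column": "Status", "old": "Open", "new": "Shipped"},
    {"time": datetime(2012, 6, 1, 9, 20), "user": "clerk1", "table": "Orders",
     "operation": "DELETE", "column": None, "old": "row 1042", "new": None},
]

# Filter the trail much like the Filter tab: by operation, user, or time window.
updates_by_sa = [r for r in audit_log
                 if r["operation"] == "UPDATE" and r["user"] == "sa"]

# Export the filtered result to CSV for a simple report.
with open("audit_report.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(audit_log[0].keys()))
    writer.writeheader()
    writer.writerows(updates_by_sa)
```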

    Read the article

  • Why lock-free data structures just aren't lock-free enough

    - by Alex.Davies
    Today's post will explore why the current ways to communicate between threads don't scale, and show you a possible way to build scalable parallel programming on top of shared memory. The problem with shared memory Soon, we will have dozens, hundreds and then millions of cores in our computers. It's inevitable, because individual cores just can't get much faster. At some point, that's going to mean that we have to rethink our architecture entirely, as millions of cores can't all access a shared memory space efficiently. But millions of cores are still a long way off, and in the meantime we'll see machines with dozens of cores, struggling with shared memory. Alex's tip: The best way for an application to make use of that increasing parallel power is to use a concurrency model like actors, that deals with synchronisation issues for you. Then, the maintainer of the actors framework can find the most efficient way to coordinate access to shared memory to allow your actors to pass messages to each other efficiently. At the moment, NAct uses the .NET thread pool and a few locks to marshal messages. It works well on dual and quad core machines, but it won't scale to more cores. Every time we use a lock, our core performs an atomic memory operation (eg. CAS) on a cell of memory representing the lock, so it's sure that no other core can possibly have that lock. This is very fast when the lock isn't contended, but we need to notify all the other cores, in case they held the cell of memory in a cache. As the number of cores increases, the total cost of a lock increases linearly. A lot of work has been done on "lock-free" data structures, which avoid locks by using atomic memory operations directly. These give fairly dramatic performance improvements, particularly on systems with a few (2 to 4) cores. The .NET 4 concurrent collections in System.Collections.Concurrent are mostly lock-free. However, lock-free data structures still don't scale indefinitely, because any use of an atomic memory operation still involves every core in the system. A sync-free data structure Some concurrent data structures are possible to write in a completely synchronization-free way, without using any atomic memory operations. One useful example is a single producer, single consumer (SPSC) queue. It's easy to write a sync-free fixed size SPSC queue using a circular buffer*. Slightly trickier is a queue that grows as needed. You can use a linked list to represent the queue, but if you leave the nodes to be garbage collected once you're done with them, the GC will need to involve all the cores in collecting the finished nodes. Instead, I've implemented a proof of concept inspired by this intel article which reuses the nodes by putting them in a second queue to send back to the producer. * In all these cases, you need to use memory barriers correctly, but these are local to a core, so don't have the same scalability problems as atomic memory operations. Performance tests I tried benchmarking my SPSC queue against the .NET ConcurrentQueue, and against a standard Queue protected by locks. In some ways, this isn't a fair comparison, because both of these support multiple producers and multiple consumers, but I'll come to that later. I started on my dual-core laptop, running a simple test that had one thread producing 64 bit integers, and another consuming them, to measure the pure overhead of the queue. So, nothing very interesting here. 
Both concurrent collections perform better than the lock-based one as expected, but there's not a lot to choose between the ConcurrentQueue and my SPSC queue. I was a little disappointed, but then, the .NET Framework team spent a lot longer optimising it than I did. So I dug out a more powerful machine that Red Gate's DBA tools team had been using for testing. It is a 6 core Intel i7 machine with hyperthreading, adding up to 12 logical cores. Now the results get more interesting. As I increased the number of producer-consumer pairs to 6 (to saturate all 12 logical cores), the locking approach was slow, and got even slower, as you'd expect. What I didn't expect to be so clear was the drop-off in performance of the lock-free ConcurrentQueue. I could see the machine only using about 20% of available CPU cycles when it should have been saturated. My interpretation is that as all the cores used atomic memory operations to safely access the queue, they ended up spending most of the time notifying each other about cache lines that need invalidating. The sync-free approach scaled perfectly, despite still working via shared memory, which after all, should still be a bottleneck. I can't quite believe that the results are so clear, so if you can think of any other effects that might cause them, please comment! Obviously, this benchmark isn't realistic because we're only measuring the overhead of the queue. Any real workload, even on a machine with 12 cores, would dwarf the overhead, and there'd be no point worrying about this effect. But would that be true on a machine with 100 cores? Still to be solved. The trouble is, you can't build many concurrent algorithms using only an SPSC queue to communicate. In particular, I can't see a way to build something as general purpose as actors on top of just SPSC queues. Fundamentally, an actor needs to be able to receive messages from multiple other actors, which seems to need an MPSC queue. I've been thinking about ways to build a sync-free MPSC queue out of multiple SPSC queues and some kind of sign-up mechanism. Hopefully I'll have something to tell you about soon, but leave a comment if you have any ideas.
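To make the single-producer, single-consumer idea concrete, here is a language-neutral sketch of a fixed-size SPSC ring buffer (the post's actual implementation is in .NET, which is not shown here). Because the producer only ever writes the tail index and the consumer only ever writes the head index, no lock or atomic read-modify-write is needed; as the post notes, a real C#/C++ version still needs memory barriers so the writes become visible in the right order.

```python
# Conceptual sketch of a fixed-size single-producer/single-consumer queue.
# The producer owns `tail`, the consumer owns `head`; neither index is ever
# written by both sides, so no lock or CAS is required. A real implementation
# still needs memory barriers at the marked points.

class SpscQueue:
    def __init__(self, capacity):
        # one slot is left empty so "full" can be told apart from "empty"
        self.buffer = [None] * (capacity + 1)
        self.head = 0   # written only by the consumer
        self.tail = 0   # written only by the producer

    def try_enqueue(self, item):              # producer side
        next_tail = (self.tail + 1) % len(self.buffer)
        if next_tail == self.head:            # queue full
            return False
        self.buffer[self.tail] = item         # write the item before publishing it
        self.tail = next_tail                 # publish (release barrier goes here)
        return True

    def try_dequeue(self):                    # consumer side
        if self.head == self.tail:            # queue empty
            return None
        item = self.buffer[self.head]         # read the item (acquire barrier before this)
        self.head = (self.head + 1) % len(self.buffer)
        return item


q = SpscQueue(capacity=4)
q.try_enqueue(42)
print(q.try_dequeue())   # -> 42
```

The design choice worth noticing is that each index has exactly one writer, which is precisely why no core ever has to broadcast an atomic operation to the others.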

    Read the article

  • Ranking - an Introduction

    - by PointsToShare
© 2011 By: Dov Trietsch. All rights reserved Ranking Ranking is quite common on the internet. Readers are asked to rank their latest reading by clicking on one of 5 (sometimes 10) stars. The number of stars is then converted to a number, and the average number of stars as selected by all the readers is proudly (or shamefully) displayed for future readers. SharePoint 2007 lacked this feature altogether. SharePoint 2010 allows users to rank items in a list or documents in a library (the two are actually the same because a library is actually a list). But in SP2010 the computation of the average is done later on a timer rather than on-the-spot as it should be. I suspect that the reason for this shortcoming is that they did not involve a mathematician! Let me explain. Ranking is kept in a related list. When a user rates a document, an item is added to the rank-list with the item id, the user name, and his number of stars. The fact that a user already ranked an item prevents him from ranking it again. This prevents the creator of the item from asking his mother to rank it a 5 and do it 753 times, thus stacking the ballot. Some systems will allow a user to change his rating, and this is done by updating the rank-list item. Now, when the timer kicks off, the list is scanned and for each item the rank-list items containing this id are summed up and divided by the number of votes, thus yielding the new average. This is obviously very time consuming and very server intensive. In the 18th century an early actuary named James Dodson used what the great Augustus De Morgan (of De Morgan's law) later named Commutation tables. The labor involved in computing a life insurance premium was staggering and also very error prone. Clerks with pencil and paper would multiply and add mountains of numbers to do the task. The more steps, the greater the probability of error and the more expensive the process. Commutation tables created a "summary" of many steps and reduced the work 100 fold. So had Microsoft taken a lesson from the history of computation, they would have developed a way to update ratings in real time that is also about 100 times faster and far less CPU intensive. How do we do this? We use a form of commutation. We always keep the number of votes and the total of stars. One simple division gives us the average. So we write an event receiver. When a vote is added, we just add the stars to the total-stars and 1 to the number of votes. We then recompute the average. When a vote is updated, we reduce the total by the old vote, increase it by the new vote and leave the number of votes the same. Again we do the division to get the new average. When a vote is deleted (highly unlikely and maybe even prohibited), we reduce the total by that vote and reduce the number of votes by 1… Gone are the days of scanning lists, counting items, and tallying votes, and we have no need for a timer process to run it all. This is the first of a few treatises on ranking. Even though I discussed the math and the history thereof, in here I am only going to solve the presentation issue. I wanted to create the CSS and JScript needed to display the stars, create the various effects like hovering and clicking (onmouseover, onmouseout, onclick, etc.), and I wanted to create a general solution with any number of stars. When I had it all done, I created the ranking game so that I could test it. The game is interesting in and of itself, so here it is (or go to the games page and select "rank the stars"). 
BTW, when you play it, look at the source code and see how it was all done.  Next time, I'll cover how the 5 stars are displayed in the New and Update forms. When the whole set of articles is done, you'll be able to create the complete solution. That's all folks!
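As a footnote to the arithmetic above, the commutation-style bookkeeping fits in a few lines. This is only a sketch of the running totals, not actual SharePoint event-receiver code; the class and method names are made up for illustration.

```python
# Sketch of the running-total approach described above: keep the vote count
# and the star total, and the average is always one division away.
# Illustrative only; not a SharePoint event receiver.

class ItemRating:
    def __init__(self):
        self.total_stars = 0
        self.vote_count = 0

    def add_vote(self, stars):
        self.total_stars += stars
        self.vote_count += 1

    def update_vote(self, old_stars, new_stars):
        self.total_stars += new_stars - old_stars   # vote count stays the same

    def delete_vote(self, stars):
        self.total_stars -= stars
        self.vote_count -= 1

    @property
    def average(self):
        return self.total_stars / self.vote_count if self.vote_count else 0.0


r = ItemRating()
r.add_vote(5)
r.add_vote(3)
r.update_vote(3, 4)
print(r.average)   # 4.5 -- no list scan, no timer job needed
```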

    Read the article

  • Using SQL Source Control with Fortress or Vault &ndash; Part 1

    - by AjarnMark
    I am fanatical when it comes to managing the source code for my company.  Everything that we build (in source form) gets put into our source control management system.  And I’m not just talking about the UI and middle-tier code written in C# and ASP.NET, but also the back-end database stuff, which at times has been a pain.  We even script out our Scheduled Jobs and keep a copy of those under source control. The UI and middle-tier stuff has long been easy to manage as we mostly use Visual Studio which has integration with source control systems built in.  But the SQL code has been a little harder to deal with.  I have been doing this for many years, well before Microsoft came up with Data Dude, so I had already established a methodology that, while not as smooth as VS, nonetheless let me keep things well controlled, and allowed doing my database development in my tool of choice, Query Analyzer in days gone by, and now SQL Server Management Studio.  It just makes sense to me that if I’m going to do database development, let’s use the database tool set.  (Although, I have to admit I was pretty impressed with the demo of Juneau that Don Box did at the PASS Summit this year.)  So as I was saying, I had developed a methodology that worked well for us (and I’ll probably outline in a future post) but it could use some improvement. When Solutions and Projects were first introduced in SQL Management Studio, I thought we were finally going to get our same experience that we have in Visual Studio.  Well, let’s say I was underwhelmed by Version 1 in SQL 2005, and apparently so were enough other people that by the time SQL 2008 came out, Microsoft decided that Solutions and Projects would be deprecated and completely removed from a future version.  So much for that idea. Then I came across SQL Source Control from Red-Gate.  I have used several tools from Red-Gate in the past, including my favorites SQL Compare, SQL Prompt, and SQL Refactor.  SQL Prompt is worth its weight in gold, and the others are great, too.  Earlier this year, we upgraded from our earlier product bundles to the new Developer Bundle, and in the process added SQL Source Control to our collection.  I thought this might really be the golden ticket I was looking for.  But my hopes were quickly dashed when I discovered that it only integrated with Microsoft Team Foundation Server and Subversion as the source code repositories.  We have been using SourceGear’s Vault and Fortress products for years, and I wholeheartedly endorse them.  So I was out of luck for the time being, although there were a number of people voting for Vault/Fortress support on their feedback forum (as did I) so I had hope that maybe next year I could look at it again. But just a couple of weeks ago, I was pleasantly surprised to receive notice in my email that Red-Gate had an Early Access version of SQL Source Control that worked with Vault and Fortress, so I quickly downloaded it and have been putting it through its paces.  So far, I really like what I see, and I have been quite impressed with Red-Gate’s responsiveness when I have contacted them with any issues or concerns that I have had.  I have had several communications with Gyorgy Pocsi at Red-Gate and he has been immensely helpful and responsive. I must say that development with SQL Source Control is very different from what I have been used to.  
This post is getting long enough, so I’ll save some of the details for a separate write-up, but the short story is that in my regular mode, it’s all about the script files.  Script files are King and you dare not make a change to the database other than by way of a script file, or you are in deep trouble.  With SQL Source Control, you make your changes to your development database however you like.  I still prefer writing most of my changes in T-SQL, but you can also use any of the GUI functionality of SSMS to make your changes, and SQL Source Control “manages” the script for you.  Basically, when you first link your database to source control, the tool generates scripts for every primary object (tables and their indexes are together in one script, not broken out into separate scripts like DB Projects do) and those scripts are checked into your source control.  So, if you needed to, you could still do a GET from your source control repository and build the database from scratch.  But for the day-to-day work, SQL Source Control uses the same technique as SQL Compare to determine what changes have been made to your development database and how to represent those in your repository scripts.  I think that once I retrain myself to just work in the database and quit worrying about having to find and open the right script file, that this will actually make us more efficient. And for deployment purposes, SQL Source Control integrates with the full SQL Compare utility to produce a synchronization script (or do a live sync).  This is similar in concept to Microsoft’s DACPAC, if you’re familiar with that. If you are not currently keeping your database development efforts under source control, definitely examine this tool.  If you already have a methodology that is working for you, then I still think this is worth a review and comparison to your current approach.  You may find it more efficient.  But remember that the version which integrates with Vault/Fortress is still in pre-release mode, so treat it with a little caution.  I have found it to be fairly stable, but there was one bug that I found which had inconvenient side-effects and could have really been frustrating if I had been running this on my normal active development machine.  However, I can verify that that bug has been fixed in a more recent build version (did I mention Red-Gate’s responsiveness?).

    Read the article

  • Is Linear Tape File System (LTFS) Best For Transportable Storage?

    - by rickramsey
Those of us in tape storage engineering take a lot of pride in what we do, but understand that tape is the right answer to a storage problem only some of the time. And, unfortunately for a storage medium with such a long history, it has built up a few preconceived notions that are no longer valid. When I hear customers debate whether to implement tape vs. disk, one of the common strikes against tape is its perceived lack of usability. If you could go back a few generations of corporate acquisitions, you would discover that StorageTek engineers recognized this problem and started developing a solution where a tape drive could look just like a memory stick to a user. The goal was to not have to care about where files were on the cartridge, but to simply see the list of files that were on the tape, and click on them to open them up. Eventually, our friends in tape over at IBM built upon our work at StorageTek and Sun Microsystems and released the Linear Tape File System (LTFS) feature for the current LTO5 generation of tape drives as an open specification. LTFS is really a wonderful feature and we're proud to have taken part in its beginnings and, as you'll soon read, its future. Today we offer LTFS-Open Edition, which is free for you to use in your Oracle Enterprise Linux 5.5 environment - not only on your LTO5 drives, but also on your Oracle StorageTek T10000C drives. You can download it free from Oracle and try it out. LTFS does exactly what its forefathers imagined. Now you can see immediately which files are on a cartridge. LTFS does this by splitting a cartridge into two partitions. The first holds all of the necessary metadata to create a directory structure for you to easily view the contents of the cartridge. The second partition holds all of the files themselves. When tape media is loaded onto a drive, a complete file system image is presented to the user. Adding files to a cartridge can be as simple as a drag-and-drop, just as you do today on your laptop when transferring files from your hard drive to a thumb drive, or with standard POSIX file operations. You may be thinking all of this sounds nice, but asking, "when will I actually use it?" As I mentioned at the beginning, tape is not the right solution all of the time. However, if you ever need to physically move data between locations, tape storage with LTFS should be your most cost-effective and reliable answer. I will give you a few use-case examples of when LTFS can be utilized. Media and Entertainment (M&E), Oil and Gas (O&G), and other industries have a strong need for their storage to be transportable. For example, an O&G company hunting for new oil deposits in remote locations takes very large underground seismic images which need to be shipped back to a central data center. M&E operations conduct similar activities when shooting video for productions. M&E companies also often transfer files to third parties for editing and other activities. These companies have three highly flawed options for transporting data: electronic transfer, disk storage transport, or tape storage transport. The first option, electronic transfer, is impractical because of the expense of the bandwidth required to transfer multi-terabyte files reliably and efficiently. If there's one place that has bandwidth, it's your local post office, so many companies revert to physically shipping storage media. Typically, M&E companies rely on transporting disk storage between sites even though it, too, is expensive. 
Tape storage should be the preferred format because, as IDC points out, "Tape is more suitable for physical transportation of large amounts of data as it is less vulnerable to mechanical damage during transportation compared with disk" (see note 1, below). However, tape storage has not been used in the past because of the restrictions created by proprietary formats. A tape may only be readable if both the sender and receiver have the same proprietary application used to write the file. In addition, the workflows may be slowed by the need to read the entire tape cartridge during recall. LTFS solves both of these problems, clearing the way for tape to become the standard platform for transferring large files. LTFS is open and, as long as you've downloaded the free reader from our website or that of anyone in the LTO consortium, you can read the data. So if a movie studio ships a scene to a third-party partner to add, for example, sound effects or a music score, it doesn't have to care what technology the third party has. If it's written back to an LTFS-formatted tape cartridge, it can be read. Some tape vendors like to claim LTFS is a "standard," but beauty is in the eye of the beholder. It's a specification at this point, not a standard. That said, we're already seeing application vendors create functionality to write in an LTFS format based on the specification. And it's my belief that both customers and the tape storage industry will see the most benefit if we all follow the same path. As such, we have volunteered to lead the way in making LTFS a standard, first with the Storage Networking Industry Association (SNIA), and eventually through to standards bodies such as the American National Standards Institute (ANSI). Expect to hear good news soon about our efforts. So, if storage transportability is one of your requirements, I recommend giving LTFS a look. It makes tape much more user-friendly and it's free, which allows tape to maintain all of its cost advantages over disk! Note 1 - IDC Report. April, 2011. "IDC's Archival Storage Solutions Taxonomy, 2011" - Brian Zents
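Because an LTFS-mounted cartridge shows up as an ordinary file system, "writing to tape" really is just a copy with standard POSIX file operations, as described above. The sketch below assumes a hypothetical mount point; use whatever path your LTFS driver actually exposes.

```python
# Once a cartridge is mounted through LTFS it behaves like any other volume,
# so standard file operations are all that is needed. The mount point and
# file name below are hypothetical examples, not paths the driver guarantees.
import os
import shutil

LTFS_MOUNT = "/mnt/ltfs"                      # hypothetical mount point
source_file = "seismic_survey_0042.segy"      # hypothetical data file

# List what is already on the cartridge (the index partition makes this quick).
print(os.listdir(LTFS_MOUNT))

# Copy a new file onto the tape exactly as you would onto a thumb drive.
shutil.copy2(source_file, os.path.join(LTFS_MOUNT, source_file))
```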

    Read the article

< Previous Page | 432 433 434 435 436 437 438 439 440 441 442 443  | Next Page >