Search Results

Search found 5714 results on 229 pages for 'held packages'.


  • solution for an offline server

    - by dashmug
    I'm trying to set up a development server at work that will ideally be able to test-drive a couple of projects in PHP, Rails, or Django (not always running at the same time). I develop the apps locally on a Mac and then put the projects up on this server for testing with my actual users (non-techies) before deploying to a production server.

    My problem is that we have a very poor internet connection (almost negligible) at work, and the usual apt-get/yum/ports (make, clean, install) processes for setting up servers always get their packages from online repositories somewhere. I know I could probably download the source and compile it myself, but that's going to be too much of a hassle for me. I'm thinking about two solutions:

    Plan A: Run a server VM on my Mac and then use this VM as the package source for the offline server. I've read about Ubuntu's apt-proxy and it seems to be good enough, though I haven't tried it yet. I'm not sure if this is possible, but can I simply do "apt-get install nginx --download-only" so that the package and its dependencies are downloaded into my VM, and my server can then use the VM as its source repo for apt-get?

    Plan B: Run a server VM on my Mac (which I can set up and update easily when I'm home) and then clone the VM to the offline development server. Maybe I should simply make the server a VM host so I can copy the VM over. I think this is okay for the first-time setup, but subsequent updates will take too long (cloning the VM image).

    If I were working on Windows, I imagine it'd be easier, because most services have an installer file that I can download and then run at the server. If you can suggest another way, it would be much appreciated.

    Update: From Michael Hampton's answer, I found a possible solution, which is apt-cacher. I also found this page on Ubuntu's website. I wonder if there is a better tool than this one.
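
    A minimal sketch of Plan A's download step, assuming an Ubuntu VM whose release and architecture match the offline server (the package name and the proxy address are illustrative):

        # fetch nginx plus any missing dependencies without installing them;
        # the .debs land in /var/cache/apt/archives on the VM
        apt-get install --download-only nginx

        # apt-cacher-ng is one alternative to apt-cacher/apt-proxy: run it on
        # the VM and point the offline server's apt at it
        echo 'Acquire::http::Proxy "http://vm-address:3142";' > /etc/apt/apt.conf.d/01proxy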

    Read the article

  • Route all traffic of home network through VPN [migrated]

    - by user436118
    I have a typical semi-advanced home network scenario: a cable modem (eth); a wireless router (Netgear N600, eth and wlan); a home server (running Ubuntu 12.04 LTS, connected over wlan); and a bunch of wireless clients (wlan). Lying around I have another, cheaper wlan router and two different USB wlan NICs that are known to work with Linux.

    ACTA struck. I want to route ALL of my WAN traffic through a remote server via a VPN. For the sake of completeness, let's say there is a remote server running Debian Squeeze where a VPN server is to be installed. The network should then behave so that if the VPN is not operative, it is separated from the outside world. I am familiar with general system/network practices but lack the specific detailed knowledge to accomplish this. Please suggest the right approach, packages, and configurations you'd use to reach said solution.

    I've also envisioned the following network configuration; please improve it if you see fit:

        Client:           ip 10.1.1.x     nm 255.0.0.0      gw 10.1.1.1     (reached via WLAN)
        Wlan router 1:    ip 10.1.1.1     nm 255.0.0.0      gw 10.10.10.1   (reached via ETH)
        Homeserver eth0:  ip 10.10.10.1   nm 0.0.0.0        gw 192.168.0.1  (reached via WLAN)
                          <<< the VPN is initiated here; the other endpoint is somewhere on the internet
        Homeserver wlan0: ip 192.168.0.2  nm 255.255.255.0  gw 192.168.0.1  (reached via WLAN)
        Wlan router 2:    ip 192.168.0.1  nm 0.0.0.0        gw set via DHCP; uplink connector: cable modem
        Cable modem:      remote DHCP; has an on-board DHCP server for the ethernet device
                          that connects to it, and only works this way

    All this WLAN fussery is because my home server is located in a part of the house where a cable link isn't possible, unfortunately.
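
    A hedged sketch of the usual approach (OpenVPN on both ends plus an iptables "kill switch" on the home server); the interface names and port follow the layout above but are assumptions:

        # in the home server's OpenVPN client config, send all traffic into the
        # tunnel:
        #   redirect-gateway def1

        # assuming eth0 faces the LAN and wlan0 faces the modem: LAN traffic may
        # only leave via tun0, and only the VPN handshake may use the uplink
        iptables -A FORWARD -i eth0 -o tun0 -j ACCEPT
        iptables -A FORWARD -i eth0 -o wlan0 -j DROP
        iptables -A OUTPUT  -o wlan0 -p udp --dport 1194 -j ACCEPT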

    Read the article

  • apt unable to install particular version of package (but gives no error)

    - by Arc2009
    I'd like to install a particular version of libstdc++6 with the following command:

        # apt-get install libstdc++6=4.9.0-8 -V
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following extra packages will be installed:
           libstdc++6 (4.8.2-16)
        0 upgraded, 0 newly installed, 0 to remove and 216 not upgraded.

    It gives no error, but apt keeps the version that was already installed. It also refers to this package as "extra". There are no apt preferences set in /etc/apt/preferences.d, and the desired version is definitely available through our local mirror (if I run "apt-get download libstdc++6=4.9.0-8", it downloads exactly the desired version). System info:

        # cat /etc/issue.net
        Debian GNU/Linux jessie/sid
        # uname -a
        Linux www27 3.13-1-amd64 #1 SMP Debian 3.13.7-1 (2014-03-25) x86_64 GNU/Linux
        # dpkg -l | egrep -i "apt|dpkg"
        ii  apt   0.9.16.1  amd64  commandline package manager
        ii  dpkg  1.17.6    amd64  Debian package management system

    Any suggestions?
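
    A hedged first diagnostic; the suite name in the second command is a guess about where 4.9.0-8 actually lives on the mirror:

        # show every version apt can see, and the pin priority of each
        apt-cache policy libstdc++6

        # if 4.9.0-8 sits in a lower-priority suite, select that suite explicitly
        apt-get install -t experimental libstdc++6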

    Read the article

  • Install KVM-based Windows 2008 remotely over SSH on a headless, no-graphics Ubuntu 10.04 server?

    - by taazaa
    Hi, I have a Dell server at a remote data center with Ubuntu 10.04 as the host. It is a minimal install with the necessary virtualization packages; there is no X and the machine is headless. I have the Win2008 DVD in the machine and want to install it remotely. I tried:

        virt-install --connect qemu:///system -n vmwin2k8 -r 1024 \
            --disk path=server2k8.qcow2,size=50 --cdrom /dev/sr0 \
            --vnc --noautoconsole --os-type windows --os-variant win2k8

    The qcow2 image gets created. However, I don't understand how to connect to see the install via VNC. This is my first time doing it, so it may be trivial or may not be possible. Remotely I have a Win 7 machine with PuTTY and RealVNC Viewer. Where is the graphic output of VNC going? Do I have to have a VNC server running on the host or some other machine and then connect to it from my VNC client? Please let me know or point me in the right direction. I have been searching the web for several days to figure out how this is supposed to work. Thanks!
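
    A hedged sketch of the usual headless pattern: QEMU runs its own VNC server for the guest on the host, and you reach it through an SSH tunnel (names and ports are illustrative):

        # on the host: which VNC display did the guest get? (:0 means TCP 5900)
        virsh vncdisplay vmwin2k8

        # QEMU usually binds that VNC server to localhost, so forward the port
        # from the Win7 box with PuTTY (or plink on the command line) and then
        # point RealVNC Viewer at localhost:5900
        plink -L 5900:localhost:5900 user@yourserver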

    Read the article

  • building a debian base image

    - by Michael
    Is there a preferred way to create base images for Debian-based customized installations? We are currently going with multistrap, but although it's better than hand-crafted chroot stuff, it still has a lot of edges and corners. Is there a more reliable and less error-prone way to produce a root filesystem of a Debian installation with some additional .debs installed? (I don't want to send out a Debian installer with a preseed file, though.)

    Addendum 1: To clarify things a bit: we are delivering a kind of software appliance to our customers. That is, a Debian operating system with some additional software packages -- both our own and third-party ones -- and some configuration changes. To ease the installation process, we have an installer that does nothing more than partitioning, copying files to the partitions, and setting up GRUB, so it's basically an image-based installer. We are therefore running the Debian installation ourselves and just distributing the already-installed operating system. The question is about the installation part: I want that to be as easy and robust as possible, and of course it should be an automated process.
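
    For comparison, a hedged sketch of the debootstrap route, which tends to have fewer corners than multistrap for a plain root filesystem (the suite, package names, and paths are illustrative):

        # build a minimal rootfs with extra Debian packages baked in
        debootstrap --include=openssh-server,acpid wheezy /srv/rootfs http://deb.debian.org/debian

        # drop in your own .debs and configuration afterwards
        cp our-app_1.0_amd64.deb /srv/rootfs/tmp/
        chroot /srv/rootfs dpkg -i /tmp/our-app_1.0_amd64.deb

        # pack it up as the image your installer copies onto the partitions
        tar -C /srv/rootfs -czf appliance-rootfs.tar.gz .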

    Read the article

  • unmanaged VPS account; beginner's questions

    - by pesar2
    I have a classifieds website which uses MySQL, PHP, Solr (Java), etc. I wonder where I should start after purchasing a VPS package from my provider. There are, first of all, several packages; I am going with Linux because, as far as I know, it is the most stable system. But I have never used Linux before!

    What is Ubuntu, and which version of it should I get? What's 64-bit Ubuntu then? How do I install PHP, JavaScript, MySQL, Java and all that? What is Debian, do I need it? What is Apache, do I need that? And most importantly, what applications do I need, that I must have? (I mean applications which a beginner would never know were needed; what do you recommend?)

    After getting the VPS, how do I even access it? Do I type some kind of IP into the browser, or is it via an FTP program? How do I access the so-called "terminal"? Please guide me, I am completely new to Linux and VPS! Thanks
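
    A hedged sketch of a typical first session on a fresh Ubuntu VPS; access is over SSH rather than the browser, and the IP address and package names are era-appropriate placeholders:

        # the provider gives you an IP and a root password; the "terminal" is SSH
        ssh root@203.0.113.10

        # install the stack the site needs (JavaScript runs in the browser, so
        # there is nothing to install for it server-side)
        apt-get update
        apt-get install apache2 mysql-server php5 libapache2-mod-php5 openjdk-6-jre

        # copy files up with scp/sftp rather than a browser
        scp -r ./website root@203.0.113.10:/var/www/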

    Read the article

  • Can't upgrade MySQL Server on new Ubuntu 12.04 install

    - by user179627
    After freshly installing Ubuntu Server 12.04, I did the usual apt-get update / apt-get upgrade, which failed for mysql-server-5.5:

        Setting up mysql-server-5.5 (5.5.31-0ubuntu0.12.04.2) ...
        start: Job failed to start
        invoke-rc.d: initscript mysql, action "start" failed.
        dpkg: error processing mysql-server-5.5 (--configure):
         subprocess installed post-installation script returned error exit status 1
        dpkg: dependency problems prevent configuration of mysql-server:
         mysql-server depends on mysql-server-5.5; however:
          Package mysql-server-5.5 is not configured yet.
        dpkg: error processing mysql-server (--configure):
         dependency problems - leaving unconfigured

    I tried a wide variety of approaches suggested by googling, which involved various combinations of apt-get remove/purge/install -f/reinstall, etc., with no luck. I also tried downloading the package directly from launchpad.net and running dpkg -i on it (this had worked for a similar issue with a kernel upgrade), but to no avail.

    I'm not actually particularly interested in what's going on with MySQL per se (though I will need to figure it out at some point); at this point, my primary concern is that I am unable to apt-get install other packages! What to do?
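
    A hedged triage order: dpkg is only stuck because the mysql init job fails, so find out why mysqld won't start before fighting the package system (log paths assume Ubuntu 12.04's defaults):

        # the real failure is usually in mysqld's own logs, not in apt's output
        tail -n 50 /var/log/mysql/error.log
        tail -n 50 /var/log/upstart/mysql.log

        # once mysqld can start (or the broken package is purged), unwedge dpkg
        # so other packages install again
        dpkg --configure -a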

    Read the article

  • Make eix available version match emerge

    - by Ryaner
    We have our Gentoo hosts using a binhost, with EMERGE_DEFAULT_OPTS="--getbinpkgonly --usepkgonly" in make.conf so that each host only pulls down binary packages. All works well from that side. I use eix to check software versions for upgrades, but I have hit a problem where eix sees an available version ahead of what is actually available on the binhost. Using glibc as an example:

        ietpl [VE] / # emerge -s glibc
        Searching...
        [ Results for search key : glibc ]
        [ Applications found : 1 ]
        *  sys-libs/glibc
              Latest version available: 2.14.1-r3
              Latest version installed: 2.14.1-r3
              Homepage:
              Description: GNU libc6 (also called glibc2) C library
              License:     LGPL-2

    Then eix reports a higher version available:

        ietpl [VE] / # export LASTVERSION='{last}<version>{}'
        ietpl [VE] / # /usr/bin/eix --nocolor --format '<category> <name> [<installedversions:LASTVERSION>] [<bestversion:LASTVERSION>] \n' --exact --category-name sys-libs/glibc
        sys-libs glibc [2.14.1-r3] [2.15-r2]

    What I'm after is for eix to report the latest available version as 2.14.1-r3, like emerge does. I have a feeling this is possible, since without any formatting eix returns:

        Available versions:  (2.2) ~2.9_p20081201-r3!s 2.10.1-r1!s 2.11.3!s ~2.12.1-r3!s 2.12.2!s{tbz2} ~2.13-r2!s 2.13-r4!s ~2.14!s ~2.14.1-r2!s 2.14.1-r3!s{tbz2} ~2.15-r1!s 2.15-r2!s ~2.15-r3!s **2.16.0!s **9999!s

    correctly tagging the latest unmasked binary package with {tbz2}. I would have thought that the binary flag would do it, but that returns no matches:

        --binary    Match packages with *.tbz2 files.
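
    A hedged cross-check while hunting for the right eix flag: ask portage itself what the binhost would deliver, which is authoritative even if less convenient to script:

        # --pretend with the same binhost-only options from make.conf shows the
        # version that would actually be installed from binary packages
        emerge --pretend --getbinpkgonly --usepkgonly --update sys-libs/glibc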

    Read the article

  • jQuery wrap code after x number of elements

    - by Al
    Hi all - I have a UL that contains any number of LIs. I am trying to create some jQuery code that will parse the original UL and wrap a UL and another LI around every 5 of the original LIs. Starting HTML:

        <ul id="original_ul">
            <li class="original_li">..</li>
            <li>..</li>
            <li>..</li>
            <li>..</li>
            <li>..</li>
            <li>..</li>
            <li>..</li>
            <li>..</li>
            <li>..</li>
            <li>..</li>
        </ul>

    Required HTML:

        <ul id="processed_ul">
            <li class="new_wrap_li">
                <ul class="new_wrap_ul">
                    <li class="original_li">..</li>
                    <li>..</li>
                    <li>..</li>
                    <li>..</li>
                    <li>..</li>
                </ul><!-- end new wrap ul -->
            </li><!-- end new wrap li -->
            <li class="new_wrap_li">
                <ul class="new_wrap_ul">
                    <li class="original_li">..</li>
                    <li>..</li>
                    <li>..</li>
                    <li>..</li>
                    <li>..</li>
                </ul><!-- end new wrap ul -->
            </li><!-- end new wrap li -->
        </ul><!-- end processed ul -->

    I've been using the .each function to go through the LIs and append them to the new processed UL held inside a temp div... now I just need to wrap the new LI and UL around every 5 LIs. Thanks in advance!! Al

    Read the article

  • Sharepoint list fields stored in content database?

    - by nav
    Hi, I am trying to find out where data for SharePoint list fields is stored in the content database. For example, from the AllLists table, filtering on the list ID I am interested in, I can derive the following from the tp_ContentTypes column for the field I'm interested in:

        <Field Type="CascadingDropDownListFieldWithFilter" DisplayName="Secondary Subject" Required="FALSE"
               ID="{b4143ff9-d5a4-468f-8793-7bb2f06b02a0}" SourceID="{6c1e9bbf-4f02-49fd-8e6c-87dd9f26158a}"
               StaticName="Secondary_x0020_Subject" Name="Secondary_x0020_Subject"
               ColName="nvarchar13" RowOrdinal="0" Version="1">
          <Customization>
            <ArrayOfProperty>
              <Property><Name>SiteUrl</Name><Value xmlns:q1="http://www.w3.org/2001/XMLSchema" p4:type="q1:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">http://lvis.amberlnk.net/knowhow</Value></Property>
              <Property><Name>CddlName</Name><Value xmlns:q2="http://www.w3.org/2001/XMLSchema" p4:type="q2:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">SS</Value></Property>
              <Property><Name>CddlParentName</Name><Value xmlns:q3="http://www.w3.org/2001/XMLSchema" p4:type="q3:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">PS</Value></Property>
              <Property><Name>CddlChildName</Name><Value xmlns:q4="http://www.w3.org/2001/XMLSchema" p4:type="q4:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance"/></Property>
              <Property><Name>ListName</Name><Value xmlns:q5="http://www.w3.org/2001/XMLSchema" p4:type="q5:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">Secondary</Value></Property>
              <Property><Name>ListTextField</Name><Value xmlns:q6="http://www.w3.org/2001/XMLSchema" p4:type="q6:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">Title</Value></Property>
              <Property><Name>ListValueField</Name><Value xmlns:q7="http://www.w3.org/2001/XMLSchema" p4:type="q7:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">Title</Value></Property>
              <Property><Name>JoinField</Name><Value xmlns:q8="http://www.w3.org/2001/XMLSchema" p4:type="q8:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">Primary</Value></Property>
              <Property><Name>FilterField</Name><Value xmlns:q9="http://www.w3.org/2001/XMLSchema" p4:type="q9:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">Content Type ID</Value></Property>
              <Property><Name>FilterOperator</Name><Value xmlns:q10="http://www.w3.org/2001/XMLSchema" p4:type="q10:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">Show All</Value></Property>
              <Property><Name>FilterValue</Name><Value xmlns:q11="http://www.w3.org/2001/XMLSchema" p4:type="q11:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance"/></Property>
            </ArrayOfProperty>
          </Customization>
        </Field>

    Which table do I need to query to find the data held in this field? Many thanks, Nav
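
    If memory serves, list item field values live in the content database's AllUserData table, mapped through the ColName attribute above (nvarchar13 here). A hedged, read-only sketch; the server and database names are placeholders, and querying the content DB directly is unsupported by Microsoft:

        REM nvarchar13 comes from the ColName attribute in the field definition
        sqlcmd -S SQLSERVER -d WSS_Content -E -Q "SELECT tp_ID, nvarchar13 FROM AllUserData WHERE tp_ListId = '<list GUID from AllLists>'"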

    Read the article

  • Synchronization requirements for FileStream.(Begin/End)(Read/Write)

    - by Doug McClean
    Is the following pattern of multi-threaded calls acceptable on a .NET FileStream? Several threads calling a method like this:

        ulong offset = whatever;          // different for each thread
        byte[] buffer = new byte[8192];
        object state = someState;         // unique for each call, hence also for each thread
        IAsyncResult result;
        lock (theFile)
        {
            theFile.Seek(whatever, SeekOrigin.Begin);
            result = theFile.BeginRead(buffer, 0, 8192, AcceptResults, state);
        }
        if (result.CompletedSynchronously)
        {
            // is it required for us to call AcceptResults ourselves in this case?
            // or did BeginRead already call it for us, on this thread or another?
        }

    Where AcceptResults is:

        void AcceptResults(IAsyncResult result)
        {
            lock (theFile)
            {
                int bytesRead = theFile.EndRead(result);
                // if we guarantee that the offset of the original call was at least 8192
                // bytes from the end of the file, and thus all 8192 bytes exist, can the
                // FileStream read still actually read fewer bytes than that?

                // either:
                if (bytesRead != 8192) { Panic("Page read borked"); }
                // or:
                // issue a new call to BeginRead, moving the offsets into the FileStream
                // and the buffer, and decreasing the requested size of the read to
                // whatever remains of the buffer
            }
        }

    I'm confused because the documentation seems unclear to me. For example, the FileStream class says: "Any public static members of this type are thread safe. Any instance members are not guaranteed to be thread safe." But the documentation for BeginRead seems to contemplate having multiple read requests in flight: "Multiple simultaneous asynchronous requests render the request completion order uncertain."

    Are multiple reads permitted to be in flight or not? Writes? Is this the appropriate way to secure the Position of the stream between the call to Seek and the call to BeginRead, or does that lock need to be held all the way to EndRead, hence allowing only one read or write in flight at a time? I understand that the callback will occur on a different thread, and my handling of state and buffer handles that in a way that would permit multiple in-flight reads.

    Further, does anyone know where in the documentation to find the answers to these questions, or an article written by someone in the know? I've been searching and can't find anything. Relevant documentation: the FileStream class, the Seek method, the BeginRead method, EndRead, and the IAsyncResult interface.

    Read the article

  • SIMPLE OpenSSL RSA Encryption in C/C++ is causing me headaches

    - by Josh
    Hey guys, I'm having some trouble figuring out how to do this. Basically I just want a client and server to be able to send each other encrypted messages. This is going to be incredibly insecure because I'm trying to figure this all out, so I might as well start at the ground floor. So far I've got all the keys working, but encryption/decryption is giving me hell. I'll start by saying I am using C++, but most of these functions require C strings, so whatever I'm doing may be causing problems. Note that on the client side I receive the following error in regards to decryption:

        error:04065072:rsa routines:RSA_EAY_PRIVATE_DECRYPT:padding check failed

    I don't really understand how padding works, so I don't know how to fix it. Anywho, here are the relevant variables on each side, followed by the code.

    Client:

        RSA *myKey;  // Loaded with private key
        // The below will hold the decrypted message
        unsigned char* decrypted = (unsigned char*) malloc(RSA_size(myKey));
        /* The below holds the encrypted string received over the network.
           Originally held in a C string, but C strings never work for me
           and scare me, so I put it in a C++ string */
        string encrypted;

        // The reinterpret_cast line was to get rid of an error message.
        // Maybe the cause of one of my problems?
        if (RSA_private_decrypt(sizeof(encrypted.c_str()),
                                reinterpret_cast<const unsigned char*>(encrypted.c_str()),
                                decrypted, myKey, RSA_PKCS1_OAEP_PADDING) == -1)
        {
            cout << "Private decryption failed" << endl;
            ERR_error_string(ERR_peek_last_error(), errBuf);
            printf("Error: %s\n", errBuf);
            free(decrypted);
            exit(1);
        }

    Server:

        RSA *pkey;   // Holds the client's public key
        string key;  // Holds a session key I want to encrypt and send
        // The below will hold the encrypted message
        unsigned char *encrypted = (unsigned char*) malloc(RSA_size(pkey));

        // The reinterpret_cast line was to get rid of an error message.
        // Maybe the cause of one of my problems?
        if (RSA_public_encrypt(sizeof(key.c_str()),
                               reinterpret_cast<const unsigned char*>(key.c_str()),
                               encrypted, pkey, RSA_PKCS1_OAEP_PADDING) == -1)
        {
            cout << "Public encryption failed" << endl;
            ERR_error_string(ERR_peek_last_error(), errBuf);
            printf("Error: %s\n", errBuf);
            free(encrypted);
            exit(1);
        }

    Let me once again state, in case I didn't before, that I know my code sucks, but I'm just trying to establish a framework for understanding this. I'm sorry if this offends you veteran coders. Thanks in advance for any help you guys can provide!

    Read the article

  • .NET RegEx "Memory Leak" investigation

    - by Kevin Pullin
    I recently looked into some .NET "memory leaks" (i.e. unexpected, lingering GC-rooted objects) in a WinForms app. After loading and then closing a huge report, the memory usage did not drop as expected, even after a couple of gen2 collections. Assuming that the reporting control was being kept alive by a stray event handler, I cracked open WinDbg to see what was happening...

    Using WinDbg, the !dumpheap -stat command reported that a large amount of memory was consumed by string instances. Further refining this with the !dumpheap -type System.String command, I found the culprit, a 90MB string used for the report, at address 03be7930. The last step was to invoke !gcroot 03be7930 to see which object(s) were keeping it alive.

    My expectations were incorrect: it was not an unhooked event handler hanging onto the reporting control (and report string); instead it was held by a System.Text.RegularExpressions.RegexInterpreter instance, which is itself a descendant of a System.Text.RegularExpressions.CachedCodeEntry.

    Now, the caching of Regexes is (somewhat) common knowledge, as this helps to reduce the overhead of having to recompile the Regex each time it is used. But what then does this have to do with keeping my string alive? Based on analysis using Reflector, it turns out that the input string is stored in the RegexInterpreter whenever a Regex method is called. The RegexInterpreter holds onto this string reference until a new string is fed into it by a subsequent Regex method invocation. I'd expect similar behaviour from hanging onto Regex.Match instances and perhaps others. The chain is something like this:

        Regex.Split, Regex.Match, Regex.Replace, etc.
          -> Regex.Run
            -> RegexScanner.Scan (RegexScanner is the base class; RegexInterpreter is the subclass described above)

    The offending Regex is only used for reporting, is rarely used, and is therefore unlikely to be used again to clear out the existing report string. And even if the Regex were used at a later point, it would probably be processing another large report. This is a relatively significant problem and just plain feels dirty.

    All that said, I found a few options on how to resolve, or at least work around, this scenario. I'll let the community respond first, and if no takers come forward I will fill in any gaps in a day or two.

    Read the article

  • int, short, byte performance in back-to-back for-loops

    - by runrunraygun
    (background: http://stackoverflow.com/questions/1097467/why-should-i-use-int-instead-of-a-byte-or-short-in-c)

    To satisfy my own curiosity about the pros and cons of using the "appropriate size" integer vs the "optimized" integer, I wrote the following code, which reinforced what I previously held true about int performance in .NET (and which is explained in the link above): that it is optimized for int performance rather than short or byte.

        DateTime t;
        long a, b, c;

        t = DateTime.Now;
        for (int index = 0; index < 127; index++)
        {
            Console.WriteLine(index.ToString());
        }
        a = DateTime.Now.Ticks - t.Ticks;

        t = DateTime.Now;
        for (short index = 0; index < 127; index++)
        {
            Console.WriteLine(index.ToString());
        }
        b = DateTime.Now.Ticks - t.Ticks;

        t = DateTime.Now;
        for (byte index = 0; index < 127; index++)
        {
            Console.WriteLine(index.ToString());
        }
        c = DateTime.Now.Ticks - t.Ticks;

        Console.WriteLine(a.ToString());
        Console.WriteLine(b.ToString());
        Console.WriteLine(c.ToString());

    This gives roughly consistent results in the area of...

        ~950000
        ~2000000
        ~1700000

    ...which is in line with what I would expect to see. However, when I try repeating the loops for each data type, like this...

        t = DateTime.Now;
        for (int index = 0; index < 127; index++) { Console.WriteLine(index.ToString()); }
        for (int index = 0; index < 127; index++) { Console.WriteLine(index.ToString()); }
        for (int index = 0; index < 127; index++) { Console.WriteLine(index.ToString()); }
        a = DateTime.Now.Ticks - t.Ticks;

    ...the numbers are more like...

        ~4500000
        ~3100000
        ~300000

    ...which I find puzzling. Can anyone offer an explanation?

    NOTE: In the interest of comparing like for like, I've limited the loops to 127 because of the range of the byte value type. Also, this is an act of curiosity, not production-code micro-optimization.

    Read the article

  • MS Access 2003 - Unbound Form uses INSERT statement to save to table; what about subforms?

    - by Justin
    So I have an unbound form that I use to save data to a table on a button click. Is there a way I can have subforms for entry that will allow me to save data to the table within that same button click? Basically I want to add more entry options for the user, and while I know other ways to do it, I am particularly curious about doing it this way (if it can be done).

    Let's say the 'parent form' is frmMain, and there are two child forms, "sub1" and "sub2". Just for example's sake, let's say frmMain has two text boxes: txtTitle & txtAuthor. sub1 and sub2 each have a text box on them that represents something like prices. The idea is title & author of a book, and then a price at each store (simplified). So I tried this (because I thought it was worth a shot):

        Dim db As DAO.Database
        Dim sql As String

        sql = "INSERT INTO (Title, Author, PriceA, PriceB) VALUES ("

        If Not IsNull(Me.txtTitle) Then
            sql = sql & """" & Me.txtTitle & ""","
        Else
            sql = sql & " NULL,"
        End If

        If Not IsNull(Me.txtAuthor) Then
            sql = sql & " """ & Me.txtAuthor & ""","
        Else
            sql = sql & " NULL,"
        End If

        If Not IsNull(Forms!sub1.txtPrice) Then
            sql = sql & " """ & Forms!sub1.txtPrice & ""","
        Else
            sql = sql & " NULL,"
        End If

    Without finishing the code, I think you may see the GOTCHA I am headed for. I tried this and got an "Access cannot find the form "" " error. I think I can pretty much see why on this approach too: when I click the button that calls the new subform into the parent form, the values that were just entered are not held/saved as sub1 closes and sub2 opens.

    I should mention that the idea above is not intended to be a one-or-the-other approach; rather, both subforms are used every time. So this is an example: I want to use this method (if possible) to have about 7 different subform choices in one form, and be able to save to a table via a SQL statement. I realize that there may be better ways, but I am just wondering if I can get there with this approach, out of curiosity. Thanks as always!

    Read the article

  • CreateProcessAsUser error 1314

    - by kampi
    Hi! I want to create a process under another user, so I use LogonUser and CreateProcessAsUser. But my problem is that CreateProcessAsUser always returns the error code 1314, which means "A required privilege is not held by the client". So my question is: what am I doing wrong? Or how can I give the privileges to the handle? (I think the handle should have the privileges, or am I wrong?) Sorry for my English mistakes, but my English knowledge isn't the best :) Please help if anyone knows how to correct my application. This is a part of my code:

        STARTUPINFO StartInfo;
        PROCESS_INFORMATION ProcInfo;
        TOKEN_PRIVILEGES tp;
        int i;

        memset(&ProcInfo, 0, sizeof(ProcInfo));
        memset(&StartInfo, 0, sizeof(StartInfo));
        StartInfo.cb = sizeof(StartInfo);
        HANDLE handle = NULL;

        if (!OpenProcessToken(GetCurrentProcess(), TOKEN_ALL_ACCESS, &handle))
            printf("\nOpenProcessError");

        if (!LookupPrivilegeValue(NULL, SE_TCB_NAME, &tp.Privileges[0].Luid))
        {
            printf("\nLookupPriv error");
        }

        tp.PrivilegeCount = 1;
        tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;

        if (!AdjustTokenPrivileges(handle, FALSE, &tp, 0, NULL, 0))
        {
            printf("\nAdjustToken error");
        }

        i = LogonUser(user, domain, password, LOGON32_LOGON_INTERACTIVE,
                      LOGON32_PROVIDER_DEFAULT, &handle);
        printf("\nLogonUser return : %d", i);
        i = GetLastError();
        printf("\nLogonUser getlast : %d", i);

        if (!ImpersonateLoggedOnUser(handle))
            printf("\nImpLoggedOnUser!");

        i = CreateProcessAsUser(handle, "c:\\windows\\system32\\notepad.exe", NULL,
                                NULL, NULL, true,
                                CREATE_UNICODE_ENVIRONMENT | NORMAL_PRIORITY_CLASS | CREATE_NEW_CONSOLE,
                                NULL, NULL, &StartInfo, &ProcInfo);
        printf("\nCreateProcessAsUser return : %d", i);
        i = GetLastError();
        printf("\nCreateProcessAsUser getlast : %d", i);

        CloseHandle(handle);
        CloseHandle(ProcInfo.hProcess);
        CloseHandle(ProcInfo.hThread);

    Thanks in advance!

    Read the article

  • Is the design notion of layers contrived?

    - by Bruce
    Hi all. I'm reading through Eric Evans' awesome work, Domain-Driven Design. However, I can't help feeling that the 'layers' model is contrived. To expand on that statement, it seems as if it tries to shoehorn various concepts into a specific, neat model: that of layers talking to each other. It seems to me that the layers model is too simplified to actually capture the way that (good) software works. To expand further, Evans says: "Partition a complex program into layers. Develop a design within each layer that is cohesive and that depends only on the layers below. Follow standard architectural patterns to provide loose coupling to the layers above."

    Maybe I'm misunderstanding what 'depends' means, but as far as I can see, it can either mean a) class X (in the UI, for example) has a reference to a concrete class Y (in the main application), or b) class X has a reference to a class-Y-ish object providing class-Y-ish services (i.e. a reference held as an interface). If it means (a), then this is clearly a bad thing, since it defeats re-using the UI as a front-end to some other application that provides Y-ish functionality. But if it means (b), then how is the UI any more dependent on the application than the application is dependent on the UI? Both are decoupled from each other as much as they can be while still talking to each other.

    Evans' layer model of dependencies going one way seems too neat. First, isn't it more accurate to say that each area of the design provides a module that is pretty much an island to itself, and that ideally all communication is through interfaces, in a contract-driven/responsibility-driven paradigm? (That is, the 'dependency only on lower layers' is contrived.) Likewise with the domain layer talking to the database: the domain layer is as decoupled (through DAO etc.) from the database as the database is from the domain layer. Neither is dependent on the other; both can be swapped out.

    Second, the idea of a conceptual straight line (as in from one layer to the next) is artificial: isn't there more a network of intercommunicating but separate modules, including external services, utility services and so on, branching off at different angles?

    Thanks all; hoping that your responses can clarify my understanding of this.

    Read the article

  • jQuery.closest(); traversing down the DOM not up

    - by Alex
    Afternoon peoples. I am having a bit of a nightmare traversing a DOM tree properly. I have the following markup:

        <div class="node" id="first-wh">
            <div class="content-heading has-tools">
                <div class="tool-menu" style="position: relative">
                    <span class="menu-open stepper-down"></span>
                    <ul class="tool-menu-tools" style="display:none;">
                        <li><img src="/resources/includes/images/layout/tools-menu/edit22.png" /> Edit
                            <input type="hidden" class="variables" value="edit,hobbies,text,/theurl" /></li>
                        <li>Menu 2</li>
                        <li>Menu 3</li>
                    </ul>
                </div>
                <h3>Employment History</h3>
            </div>
            <div class="content-body editable disabled">
                <h3 class="dates">1st January 2010 - 10th June 2010</h3>
                <h3>Company</h3>
                <h4>Some Company</h4>
                <h3>Job Title</h3>
                <h4>IT Manager</h4>
                <h3>Job Description</h3>
                <p class="desc">I headed up the IT department for all things concerning IT and infrastructure</p>
                <h3>Roles &amp; Responsibilities</h3>
                <p class="desc">It is a long established fact that a reader will be distracted by the readable content of a page when looking at its layout. The point of using Lorem Ipsum is that it has a more-or-less normal distribution of letters, as opposed to using 'Content here, content here', making it look like readable English. Many desktop publishing packages and web page editors now use Lorem Ipsum as their default model text, and a search for 'lorem ipsum' will uncover many web sites still in their infancy. Various versions have evolved over the years, sometimes by accident, sometimes on purpose (injected humour and the like).</p>
            </div>
            <div class="content-body edit-node edit-node-hide">
                <input class="variables" type="hidden" value="id,function-id" />
                <h3 class="element-title">Employment Dates</h3>
                <span class="label">From:</span> <input class="edit-mode date date-from" type="text" value="date" />
                <span class="label">To:</span> <input class="edit-mode date date-to" type="text" value="date" />
                <h3 class="element-title">Company</h3>
                <input class="edit-mode" type="text" value="The company I worked for" />
                <h3 class="element-title">Job Title</h3>
                <input class="edit-mode" type="text" value="My job title" />
                <h3 class="element-title">Job Description</h3>
                <textarea class="edit-mode" type="text">The Job Title</textarea>
                <h3 class="element-title">Roles &amp; Responsibilities</h3>
                <textarea class="edit-mode" type="text">It is a long established fact that a reader will be distracted by the readable content of a page when looking at its layout. The point of using Lorem Ipsum is that it has a more-or-less normal distribution of letters, as opposed to using 'Content here, content here', making it look like readable English. Many desktop publishing packages and web page editors now use Lorem Ipsum as their default model text, and a search for 'lorem ipsum' will uncover many web sites still in their infancy. Various versions have evolved over the years, sometimes by accident, sometimes on purpose (injected humour and the like).</textarea>
                <div class="node-actions">
                    <input type="checkbox" class="checkdisable" value="This is a checkbox"/>This element is visible.<br />
                    <input type="submit" class="account-button save" value="Save" />
                    <input type="submit" class="account-button cancel" value="Cancel" />
                </div>
            </div>
        </div>

    ...and I am trying to traverse from input.save at the bottom right the way up to div.node. This all works well with one copy of the markup, but if I duplicate it (obviously changing the ID of the uppermost div.node) and use jQuery.closest('div.node') for the upper of the div.nodes, it will return the element below it, not the element above it (which is the right one). I've tried using parents(), but that also has its caveats. Is there some kind of context that can be attached to closest() to make it go up and not down, or is there a better way to do this? jQuery code below:

        $(".save").click(function () {
            var element = $(this);
            var enodes = element.parents('.edit-node').find('input.variables');
            var variables = enodes.val();
            var onode = element.closest('div.node').find('.editable');
            var enode = element.closest('div.node').find('.edit-node-hide');
            var vnode = element.closest('div.node-actions').find('input.checkdisable');
            var isvis = (vnode.is(":checked")) ? onode.removeClass('disabled') : onode.addClass('disabled');
            onode.slideDown(200);
            enode.fadeOut(100);
        });

    Thanks in advance. Alex

    P.S. It seems that Stack Overflow did something weird to the markup; I just triple-checked it and it is fine, but for some reason it concatenated it below.

    Read the article

  • Is the Scala 2.8 collections library a case of "the longest suicide note in history" ?

    - by oxbow_lakes
    First, note that the inflammatory subject title is a quotation made about the manifesto of a UK political party in the early 1980s. This question is subjective, but it is a genuine question; I've made it CW and I'd like some opinions on the matter.

    Despite whatever my wife and coworkers keep telling me, I don't think I'm an idiot: I have a good degree in mathematics from the University of Oxford and I've been programming commercially for almost 12 years, and in Scala for about a year (also commercially).

    I have just started to look at the Scala collections library re-implementation which is coming in the imminent 2.8 release. Those familiar with the library from 2.7 will notice that, from a usage perspective, the library has changed little. For example...

        > List("Paris", "London").map(_.length)
        res0: List[Int] = List(5, 6)

    ...would work in either version. The library is eminently usable: in fact it's fantastic. However, those previously unfamiliar with Scala and poking around to get a feel for the language now have to make sense of method signatures like:

        def map[B, That](f: A => B)(implicit bf: CanBuildFrom[Repr, B, That]): That

    For such simple functionality, this is a daunting signature and one which I find myself struggling to understand. Not that I think Scala was ever likely to be the next Java (or C/C++/C#) - I don't believe its creators were aiming it at that market - but I think it is/was certainly feasible for Scala to become the next Ruby or Python (i.e. to gain a significant commercial user base).

    Is this going to put people off coming to Scala? Is this going to give Scala a bad name in the commercial world as an academic plaything that only dedicated PhD students can understand? Are CTOs and heads of software going to get scared off? Was the library re-design a sensible idea? If you're using Scala commercially, are you worried about this? Are you planning to adopt 2.8 immediately or wait to see what happens?

    Steve Yegge once attacked Scala (mistakenly, in my opinion) for what he saw as its overcomplicated type system. I worry that someone is going to have a field day spreading FUD with this API (similarly to how Josh Bloch scared the JCP out of adding closures to Java). Note: I should be clear that, whilst I believe Josh Bloch was influential in the rejection of the BGGA closures proposal, I don't ascribe this to anything other than his honestly-held belief that the proposal represented a mistake.

    Read the article

  • Deleting items from a ListView using a custom BaseAdapter

    - by HXCaine
    I am using a customised BaseAdapter to display items in a ListView. The items are just strings held in an ArrayList. The list items have a delete button on them (a big red X), and I'd like to remove the item from the ArrayList and notify the ListView to update itself. However, every implementation I've tried gets mysterious position numbers given to it, so for example clicking item 2's delete button will delete item 5's. It seems to be almost entirely random. One thing to note is that elements may be repeated but must be kept in the same order; for example, I can have "Irish" twice, as elements 3 and 7. My code is below:

        private static class ViewHolder {
            TextView lang;
            int position;
        }

        public View getView(final int position, View convertView, ViewGroup parent) {
            ViewHolder holder;
            if (convertView == null) {
                convertView = mInflater.inflate(R.layout.language_link_row, null);
                holder = new ViewHolder();
                holder.lang = (TextView) convertView.findViewById(R.id.language_link_text);
                holder.position = position;
                final ImageView deleteButton =
                        (ImageView) convertView.findViewById(R.id.language_link_cross_delete);
                deleteButton.setOnClickListener(this);
                convertView.setTag(holder);
                deleteButton.setTag(holder);
            } else {
                holder = (ViewHolder) convertView.getTag();
            }
            holder.lang.setText(mLanguages.get(position));
            return convertView;
        }

    I later attempt to retrieve the deleted element's position by grabbing the tag, but it's always the wrong position in the list. There is no noticeable pattern to the position given here; it always seems random:

        // The delete button's listener
        public void onClick(View v) {
            ViewHolder deleteHolder = (ViewHolder) v.getTag();
            int pos = deleteHolder.position;
            ...
        }

    I would be quite happy to just delete the item from the ArrayList and have the ListView update itself, but the position I'm getting is incorrect, so I can't do that. Please note that I did at first have the deleteButton clickListener inside the getView method and used 'position' to delete the value, but I had the same problem. Any suggestions appreciated, this is really irritating me.

    Read the article

  • Delay keyboard input help

    - by Stradigos
    I'm so close! I'm using the XNA Game State Management example found here and trying to modify how it handles input so I can delay the key, creating an input buffer.

    In GameplayScreen.cs I've declared a double called elapsedTime and set it to 0. In the HandleInput method I've changed the Keys.Right handling to:

        if (keyboardState.IsKeyDown(Keys.Left))
            movement.X -= 50;

        if (keyboardState.IsKeyDown(Keys.Right))
        {
            elapsedTime -= gameTime.ElapsedGameTime.TotalMilliseconds;
            if (elapsedTime <= 0)
            {
                movement.X += 50;
                elapsedTime = 10;
            }
        }
        else
        {
            elapsedTime = 0;
        }

    The pseudo-code: if the right arrow key is not pressed, set elapsedTime to 0. If it is pressed, elapsedTime equals itself minus the milliseconds since the last frame. If the result is 0 or less, move the object by 50 and set elapsedTime to 10 (the delay). If the key is being held down, elapsedTime should never be set to 0 via the else. Instead, after elapsedTime is set to 10 following a successful check, it should get lower and lower as TotalMilliseconds is subtracted from it. When it reaches 0, it passes the check again and moves the object once more.

    The problem is that it moves the object once per press but doesn't work if you hold the key down. Can anyone offer any sort of tip/example/bit of knowledge towards this? Thanks in advance, it's been driving me nuts. In theory I thought this would surely work.

    CLARIFICATION: Think of a grid when you're thinking about how I want the block to move. Instead of just fluidly moving across the screen, it moves by its width (sort of jumping) to the next position. If I hold down the key, it races across the screen. I want to slow this whole process down so that holding the key creates an X-millisecond delay between each jump/move by its width.

    EDIT: Turns out gameTime.ElapsedGameTime.TotalMilliseconds is returning 0... all of the time. I have no idea why.

    Read the article

  • How do you stop a user-instance of Sql Server? (Sql Express user instance database files locked, even after stopping the service)

    - by Bittercoder
    When using SQL Server Express 2005's User Instance feature with a connection string like this:

        <add name="Default" connectionString="Data Source=.\SQLExpress;
             AttachDbFilename=C:\My App\Data\MyApp.mdf;
             Initial Catalog=MyApp;
             User Instance=True;
             MultipleActiveResultSets=true;
             Trusted_Connection=Yes;" />

    we find that we can't copy the database files MyApp.mdf and MyApp_Log.ldf (because they're locked), even after stopping the SqlExpress service, and have to resort to setting the SqlExpress service from automatic to manual startup mode and then restarting the machine before we can copy the files. It was my understanding that stopping the SqlExpress service should stop all the user instances as well, which should release the locks on those files. But this does not seem to be the case. Could anyone shed some light on how to stop a user instance such that its database files are no longer locked?

    Update: OK, I stopped being lazy and fired up Process Explorer. The lock was held by sqlserver.exe, but there are two instances of SQL Server:

        sqlserver.exe  PID: 4680  User Name: DefaultAppPool
        sqlserver.exe  PID: 4644  User Name: NETWORK SERVICE

    The file is open by the sqlserver.exe instance with PID 4680. Stopping the "SQL Server (SQLEXPRESS)" service killed off the process with PID 4644 but left PID 4680 alone. Seeing as the owner of the remaining process was DefaultAppPool, the next thing I tried was stopping IIS (this database is being used from an ASP.NET application). Unfortunately this didn't kill the process off either.

    Manually killing off the remaining SQL Server process does remove the open file handle on the database files, allowing them to be copied/moved. Unfortunately, I wish to copy/restore those files in some pre/post-install tasks of a WiX installer, so I was hoping there might be a way to achieve this by stopping a Windows service rather than having to shell out to kill all instances of sqlserver.exe, as that poses some problems: killing all the sqlserver.exe instances may have undesirable consequences for users with other SQL Server instances on their machines; I can't restart those instances easily; and it introduces additional complexities into the installer.

    Does anyone have any further thoughts on how to shut down instances of SQL Server associated with a specific user instance?
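
    A hedged sketch of shutting the user instance down cleanly instead of killing it; if memory serves, sys.dm_os_child_instances lists each user instance and the named pipe it answers on (the pipe path below is a placeholder):

        REM ask the parent instance about its running user instances
        sqlcmd -S .\SQLEXPRESS -E -Q "SELECT owning_principal_name, instance_pipe_name FROM sys.dm_os_child_instances"

        REM connect to the reported pipe (the actual path will differ) and stop it
        sqlcmd -S np:\\.\pipe\<reported pipe>\tsql\query -E -Q "SHUTDOWN"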

    Read the article

  • Memory management with Objective-C Distributed Objects: my temporary instances live forever!

    - by jkp
    I'm playing with Objective-C Distributed Objects and I'm having some problems understanding how memory management works under the system. The example given below illustrates my problem.

    Protocol.h:

        #import <Foundation/Foundation.h>

        @protocol DOServer
        - (byref id)createTarget;
        @end

    Server.m:

        #import <Foundation/Foundation.h>
        #import "Protocol.h"

        @interface DOTarget : NSObject
        @end

        @interface DOServer : NSObject < DOServer >
        @end

        @implementation DOTarget

        - (id)init
        {
            if ((self = [super init])) {
                NSLog(@"Target created");
            }
            return self;
        }

        - (void)dealloc
        {
            NSLog(@"Target destroyed");
            [super dealloc];
        }

        @end

        @implementation DOServer

        - (byref id)createTarget
        {
            return [[[DOTarget alloc] init] autorelease];
        }

        @end

        int main()
        {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            DOServer *server = [[DOServer alloc] init];
            NSConnection *connection = [[NSConnection new] autorelease];
            [connection setRootObject:server];
            if ([connection registerName:@"test-server"] == NO) {
                NSLog(@"Failed to vend server object");
            } else {
                [[NSRunLoop currentRunLoop] run];
            }
            [pool drain];
            return 0;
        }

    Client.m:

        #import <Foundation/Foundation.h>
        #import "Protocol.h"

        int main()
        {
            unsigned i = 0;
            for (; i < 3; i++) {
                NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
                id server = [NSConnection rootProxyForConnectionWithRegisteredName:@"test-server"
                                                                              host:nil];
                [server setProtocolForProxy:@protocol(DOServer)];
                NSLog(@"Created target: %@", [server createTarget]);
                [[NSRunLoop currentRunLoop] runUntilDate:[NSDate dateWithTimeIntervalSinceNow:1.0]];
                [pool drain];
            }
            return 0;
        }

    The issue is that any remote objects created by the root proxy are not released when their proxy counterparts in the client go out of scope. According to the documentation: "When an object's remote proxy is deallocated, a message is sent back to the receiver to notify it that the local object is no longer shared over the connection."

    I would therefore expect that as each DOTarget goes out of scope (each time around the loop), its remote counterpart would be deallocated, since there is no other reference to it being held on the remote side of the connection. In reality this does not happen: the temporary objects are only deallocated when the client application quits or, more accurately, when the connection is invalidated. I can force the temporary objects on the remote side to be deallocated by explicitly invalidating the NSConnection object each time around the loop and creating a new one, but somehow this just feels wrong.

    Is this the correct behaviour from DO? Should all temporary objects live as long as the connection that created them? Are connections therefore to be treated as temporary objects, to be opened and closed with each series of requests against the server? Any insights would be appreciated.

    Read the article

  • Memory Leak with Swing Drag and Drop

    - by tom
    I have a JFrame that accepts top-level drops of files. However, after a drop has occurred, references to the frame are held indefinitely inside some Swing internal classes. I believe that disposing of the frame should release all of its resources, so what am I doing wrong? Example:

        import java.awt.datatransfer.DataFlavor;
        import java.io.File;
        import java.util.List;

        import javax.swing.JFrame;
        import javax.swing.JLabel;
        import javax.swing.TransferHandler;

        public class DnDLeakTester extends JFrame {
            public static void main(String[] args) {
                new DnDLeakTester();
                // Prevent main from returning or the JVM will exit
                while (true) {
                    try {
                        Thread.sleep(10000);
                    } catch (InterruptedException e) {
                    }
                }
            }

            public DnDLeakTester() {
                super("I'm leaky");
                add(new JLabel("Drop stuff here"));
                setTransferHandler(new TransferHandler() {
                    @Override
                    public boolean canImport(final TransferSupport support) {
                        return (support.isDrop() && support
                                .isDataFlavorSupported(DataFlavor.javaFileListFlavor));
                    }

                    @Override
                    public boolean importData(final TransferSupport support) {
                        if (!canImport(support)) {
                            return false;
                        }
                        try {
                            final List<File> files = (List<File>) support.getTransferable()
                                    .getTransferData(DataFlavor.javaFileListFlavor);
                            for (final File f : files) {
                                System.out.println(f.getName());
                            }
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                        return true;
                    }
                });
                setDefaultCloseOperation(DISPOSE_ON_CLOSE);
                pack();
                setVisible(true);
            }
        }

    To reproduce, run the code and drop some files on the frame, then close the frame so it's disposed of. To verify the leak, I take a heap dump using JConsole and analyse it with the Eclipse Memory Analysis tool. It shows that sun.awt.AppContext is holding a reference to the frame through its HashMap. It looks like TransferSupport is at fault.

    What am I doing wrong? Should I be asking the DnD support code to clean itself up somehow? I'm running JDK 1.6 update 19.

    Read the article

  • Patterns: Local Singleton vs. Global Singleton?

    - by Mike Rosenblum
    There is a pattern that I use from time to time, but I'm not quite sure what it is called. I was hoping that the SO community could help me out. The pattern is pretty simple and consists of two parts:

    1. A singleton factory, which creates objects based on the arguments passed to the factory method.
    2. Objects created by the factory.

    So far this is just a standard "singleton" pattern or "factory pattern". The issue that I'm asking about, however, is that the singleton factory in this case maintains a set of references to every object that it ever creates, held within a dictionary. These references can sometimes be strong references and sometimes weak references, but it can always reference any object that it has ever created.

    When receiving a request for a "new" object, the factory first searches the dictionary to see if an object with the required arguments already exists. If it does, it returns that object; if not, it returns a new object and also stores a reference to the new object within the dictionary.

    This pattern prevents having duplicative objects representing the same underlying "thing". It is useful where the created objects are relatively expensive. It can also be useful where these objects perform event handling or messaging: having one object per item being represented can prevent multiple messages/events for a single underlying source. There are probably other reasons to use this pattern, but this is where I've found it useful.

    My question is: what to call this? In a sense, each object is a singleton, at least with respect to the data it contains. Each is unique. But there are multiple instances of this class, however, so it's not at all a true singleton.

    In my own personal terminology, I tend to call the factory method a "global singleton" and the created objects "local singletons". I sometimes also say that the created objects have "reference equality", meaning that if two variables reference the same data (the same underlying item), then the references they each hold must be to the same exact object, hence "reference equality". But these are my own invented terms, and I am not sure that they are good ones. Is there standard terminology for this concept? And if not, could some naming suggestions be made? Thanks in advance...

    Read the article
