Search Results

Search found 8037 results on 322 pages for 'hardware hacking'.

Page 120/322

  • Licensing your code in Mono

    - by Jerry
    I'm working with some code in Visual Studio. My partner-in-crime fellow developer has suggested that the code also be made to work under Mono. I'm impressed with the work that has already been done in Mono, but I'm very new to it, so I don't know what it can and cannot do. I've already written a class in C# using the .NET LicenseManager object. It writes to the Windows registry, so I know I'll have to modify it to use compiler flags like #if WIN32 or #if MONO. My question is two-fold: 1) Does Mono implement the same LicenseManager class structure? 2) If so, how do you lock down your code using LicenseManager on Linux? (i.e. write to files, use a hardware dongle, compare against hardware serials, etc.?)
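
    A minimal sketch of the compiler-flag split described above, assuming a MONO symbol defined in the Mono build (e.g. via -define:MONO); the registry path and license file name are placeholders, not a convention. Whether Mono mirrors the full LicenseManager class structure is exactly the open question here:

        using System;
        using System.IO;

        public static class LicenseStore
        {
            public static void Save(string licenseKey)
            {
        #if MONO
                // No registry on Linux: fall back to a file under the
                // user's application-data directory (placeholder path).
                string path = Path.Combine(
                    Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
                    "myapp.lic");
                File.WriteAllText(path, licenseKey);
        #else
                // The Windows build keeps the existing registry behaviour.
                using (var key = Microsoft.Win32.Registry.CurrentUser
                                    .CreateSubKey(@"Software\MyApp"))
                {
                    key.SetValue("LicenseKey", licenseKey);
                }
        #endif
            }
        }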

    Read the article

  • How would I access the Windows Login (Authentication) API from a C++ Service Application?

    - by Gabriel
    Let us imagine for a moment that I have a piece of hardware that can act as authentication for a user on a given system. I want to write an application in C++ to run as a service, look for this device, and if it is found, log the appropriate user in. I believe I have found the APIs I would need for the hardware and service portions of the application, but I am having a hard time nailing down a way to create a "real" user login. Is this possible? If so, where would I look for resources on accomplishing this? I think of it as being analogous to fingerprint-scanner login devices.
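
    For the credential-check half, a hedged C++ sketch of the Win32 call usually reached for here, LogonUser in advapi32 (the credentials below are placeholders). Note that it only validates credentials and yields an access token; creating the actual interactive desktop logon, the way fingerprint devices do, classically means implementing a GINA (XP) or a Credential Provider (Vista and later) instead:

        // Link against advapi32.lib.
        #include <windows.h>
        #include <iostream>

        int main()
        {
            HANDLE token = NULL;
            // Placeholder credentials; "." means the local account database.
            BOOL ok = LogonUserW(L"alice", L".", L"secret",
                                 LOGON32_LOGON_INTERACTIVE,
                                 LOGON32_PROVIDER_DEFAULT,
                                 &token);
            if (ok) {
                std::wcout << L"credentials accepted; token obtained\n";
                CloseHandle(token);   // a token, not an interactive session
            } else {
                std::wcout << L"logon failed: " << GetLastError() << L"\n";
            }
            return 0;
        }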

    Read the article

  • How to run an Erlang-based robot? Is it possible to convert it into .hex and run it on a microcontroller?

    - by Dinesh
    I am working on an Erlang robotics project. I have written a wall-follower robot program which has two files: 1. a C program to communicate with the hardware (I think we cannot use Erlang directly for this), and 2. an Erlang program to call these functions. I want to know on which platforms I can run this robot. Is it possible to run it on microcontroller-based hardware (8051 or ARM7)? Is it possible to convert the Erlang program into C code, or directly into a .hex file? If anyone has any idea, please help. Thanks.
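
    For the Erlang-to-C half, the standard mechanism is a port: the C program runs as an external OS process and exchanges length-prefixed messages with the Erlang VM. A minimal sketch, where "./robot_driver" and the command bytes are placeholders:

        %% Sketch: talking to the C half through an Erlang port.
        -module(robot).
        -export([start/0]).

        start() ->
            Port = open_port({spawn, "./robot_driver"}, [binary, {packet, 2}]),
            Port ! {self(), {command, <<"read_sensors">>}},
            receive
                {Port, {data, Reply}} -> Reply
            after 1000 ->
                timeout
            end.

    As for .hex: as far as I know the Erlang side always needs the BEAM virtual machine, so it cannot be compiled down to a bare .hex image for an 8051; the usual split is exactly the one described above, C on the microcontroller and Erlang on a host capable of running the VM.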

    Read the article

  • How does Linux blocking I/O actually work?

    - by tgguy
    In Linux, when you make a blocking I/O call like read or accept, what actually happens? My thoughts: the process gets taken out of the run queue and put into a waiting or blocked state on some wait queue. Then, when a TCP connection is made (for accept), or the hard drive is ready for a file read, a hardware interrupt is raised which lets the waiting processes wake up and run. (In the case of a file read, how does Linux know which processes to awaken, as there could be lots of processes waiting on different files?) Or perhaps, instead of hardware interrupts, each process itself polls to check availability. I'm not sure; help?
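
    For what it's worth, the user-space half of that story is visible with O_NONBLOCK: the same read() that would sleep on the file's wait queue instead returns EAGAIN immediately. And wait queues are kept per open file or socket, which is how the kernel wakes only the right sleepers rather than scanning every blocked process. A small C sketch (the device path is a placeholder):

        #include <fcntl.h>
        #include <unistd.h>
        #include <errno.h>
        #include <stdio.h>

        int main(void)
        {
            int fd = open("/dev/ttyS0", O_RDONLY | O_NONBLOCK); /* placeholder */
            if (fd < 0) { perror("open"); return 1; }

            char buf[64];
            ssize_t n = read(fd, buf, sizeof buf);
            if (n < 0 && errno == EAGAIN)
                puts("no data ready: a blocking read would sleep here until "
                     "the driver's interrupt handler woke this wait queue");
            close(fd);
            return 0;
        }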

    Read the article

  • Android Internet permission and Google Play filtering

    - by Ivan
    I added <uses-permission android:name="android.permission.INTERNET" /> to my manifest in order to have Internet access, but this is not a main function of my app, so I don't want to get filtered out of Google Play because of it. There is no matching <uses-feature> for this permission, so my question is: what do I need to add with required="false" to avoid filtering? I guess I could add <uses-feature android:name="android.hardware.wifi" android:required="false" />, but what about mobile Internet (3G/4G)? Do I also need <uses-feature android:name="android.hardware.telephony" android:required="false"/>? I want to know what filtering android.permission.INTERNET triggers on Google Play, if any.
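
    Assembled, the manifest fragment in question would look like the sketch below. As far as I can tell from the market-filters documentation, INTERNET is not one of the permissions that implies a <uses-feature>, so these optional declarations are purely defensive:

        <uses-permission android:name="android.permission.INTERNET" />
        <!-- Declared explicitly optional so Google Play cannot filter on them -->
        <uses-feature android:name="android.hardware.wifi" android:required="false" />
        <uses-feature android:name="android.hardware.telephony" android:required="false" />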

    Read the article

  • How to set a filter for an estimated maximum price

    - by David
    I cannot figure out how to set an estimated maximum price for a collection of records. What I want to avoid is simply using SQL MAX, because there may be records with exorbitant prices. For example, in the "computers-hardware" category of OLX (http://www.olx.com/computers-hardware-cat-240), the filter for maximum price is apparently set to $1400, but when sorting by price the first items are above $10000. Maybe they calculated the average and then estimated some maximum price; what do you think? And what about the stepping? How would you calculate it?
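
    One hedged way to get that behaviour in plain SQL: derive the cap from the distribution rather than MAX, e.g. mean plus two standard deviations (STDDEV is MySQL/PostgreSQL syntax; the multiplier and the table/column names are made up here):

        -- A handful of $10000 outliers barely move this, unlike MAX(price).
        SELECT AVG(price) + 2 * STDDEV(price) AS estimated_max
        FROM   items
        WHERE  category = 'computers-hardware';

    For the stepping, one common trick is to round the estimate to one significant digit (an estimate near 1400 suggests a step of 100), so the slider values stay readable.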

    Read the article

  • What Windows Form control would be a good fit for this use case?

    - by Sergio Tapia
    I'm going to create an open source help desk solution, free of charge, for small to medium businesses to use. I'm currently working on the client application. I want to show a list of tickets that have been opened by the user, so it would be like a table:

        Ticket Number | Type     | Description    | Date       | Handled?
        123456        | Hardware | My mouse broke | 10/20/2010 | No
        123456        | Hardware | My mouse broke | 10/20/2010 | Yes

    I was thinking of using ListView because of its name, but I have zero experience with it, so maybe it's not what I'm looking for. I'm going to be pulling the data from a WCF service, which in turn pulls it from a MS SQL database.
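
    ListView does fit this, in its Details view; a minimal sketch (the control and column names are invented here). A DataGridView is the other usual candidate, since it can bind the WCF result directly through its DataSource property:

        // In the form's constructor or Load handler:
        listViewTickets.View = View.Details;
        listViewTickets.FullRowSelect = true;
        listViewTickets.Columns.Add("Ticket Number", 100);
        listViewTickets.Columns.Add("Type", 80);
        listViewTickets.Columns.Add("Description", 200);
        listViewTickets.Columns.Add("Date", 80);
        listViewTickets.Columns.Add("Handled?", 60);

        // One row per ticket returned by the WCF service:
        listViewTickets.Items.Add(new ListViewItem(
            new[] { "123456", "Hardware", "My mouse broke", "10/20/2010", "No" }));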

    Read the article

  • OpenGL embedded in GTK displays colours badly

    - by Sardathrion
    Note that this is a rewrite, now that I have more clues as to where the problem could be. I am creating a GTK GUI which contains two embedded OpenGL displays. Both use the same shader code (compiled once for each). On my normal hardware, this works fine. On a virtual machine running on the same hardware, I get horrible colours -- see images. I suspect that the shader code is at fault; certainly, dropping in a simpler shader does make the problem moot. However, I do need both diffuse and spot lights in my shader, which makes it non-trivial. Has anyone seen this before?

    Read the article

  • Sending an SMS myself

    - by user246114
    Hi, I'm taking shots in the dark here. I'd like to create a web service where I eventually send SMS messages using my own hardware. I'm not sure what I need in order to send an SMS myself. I don't want to use any of the existing SMS sending services out there; I need to be able to send these SMS messages myself. It looks like there's one open source project in particular that deals with this, Kannel: http://www.kannel.org/. What I don't understand is: do I need a GSM modem to be able to send SMS? Do SMS gateways (like Kannel) eventually need to go through a GSM modem to send messages, or is there some other hardware you need to actually send the messages? Thanks
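
    As I understand it, gateways like Kannel ultimately talk either to a GSM modem or to a carrier's SMSC link. The GSM-modem path is just a serial conversation in the standard AT command set (GSM 07.05); a sketch of that dialogue, with a placeholder number (^Z is the literal Ctrl-Z byte that terminates the message body):

        AT+CMGF=1                     <- put the modem in text mode
        OK
        AT+CMGS="+15551234567"        <- destination number
        > Hello from my own hardware^Z
        +CMGS: 44
        OK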

    Read the article

  • Eliminating false dependencies

    - by Klaus
    Hi all, I have a quite general question regarding false dependencies. As the name implies, they are not real dependencies and can be eliminated. I am aware of techniques such as register renaming that eliminate such dependencies at the hardware level. Of course, I could already eliminate them at a "higher" level when writing assembler code that avoids false dependencies. But now I am wondering whether the compiler also provides support for keeping the number of false dependencies low, or whether it relies on the hardware to eliminate them. Thanks
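
    A tiny illustration of what gets renamed away, in x86-style assembly (the labels are invented). The second block reuses eax even though its data is unrelated, creating a write-after-write ("false") dependency; hardware renaming gives it a fresh physical register, and a compiler's register allocator can achieve the same statically simply by picking a different architectural register when one is free:

        mov  eax, [a]      ; block 1: compute x = a + 1
        add  eax, 1
        mov  [x], eax
        mov  eax, [b]      ; block 2: unrelated data, but a WAW hazard on eax
        add  eax, 2
        mov  [y], eax      ; with renaming (or a second register) both overlap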

    Read the article

  • How to write configurable embedded C code which can run on multiple hardware platforms

    - by Adnan
    Hello all, What techniques are used to write embedded C software with multiple features, where the features can be configured per hardware platform? I have developed a firmware based on an RTOS for ARM7. Now I want to make it a baseline firmware which can be used, with more or fewer (configurable) features, on different microcontrollers such as MSP or AVR. Being more specific: if I want to enable certain features of the firmware on one piece of hardware and different ones on a second, what technique should I adopt, and is there any study material available? Regards
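
    The usual shape is a thin hardware abstraction layer plus compile-time feature switches; a sketch with invented names (one HAL implementation file per target, selected by the makefile):

        /* hal.h -- the porting surface every target must implement */
        void hal_uart_init(unsigned long baud);
        void hal_uart_putc(char c);

        /* hal_arm7.c -- one such file per microcontroller family */
        #ifdef TARGET_ARM7
        void hal_uart_putc(char c)
        {
            /* write to this part's UART data register here */
        }
        #endif

        /* config.h -- per-product feature switches, set by the build */
        #define CFG_ENABLE_LOGGING 1

        #if CFG_ENABLE_LOGGING
        #define LOG(msg) log_write(msg)
        #else
        #define LOG(msg) ((void)0)   /* feature compiled out entirely */
        #endif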

    Read the article

  • How to access Camera.java in an onClick event?

    - by Srikanth Naidu
    Hi, I am making an app which takes a photo on a button click. I have Camera.java, which operates the camera and takes the photo. How do I call it from the event below?

        public void onClick(DialogInterface arg0, int arg1) {
            setContentView(R.layout.startcamera);
        }

    Camera.java:

        package neuro.com;

        import java.io.FileNotFoundException;
        import java.io.FileOutputStream;
        import java.io.IOException;

        import android.app.Activity;
        import android.hardware.Camera;
        import android.hardware.Camera.PictureCallback;
        import android.hardware.Camera.ShutterCallback;
        import android.os.Bundle;
        import android.util.Log;
        import android.view.View;
        import android.view.View.OnClickListener;
        import android.widget.Button;
        import android.widget.FrameLayout;

        public class CameraDemo extends Activity {
            private static final String TAG = "CameraDemo";
            Camera camera;
            Preview preview;
            Button buttonClick;

            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.startcamera);
                preview = new Preview(this);
                ((FrameLayout) findViewById(R.id.preview)).addView(preview);
                buttonClick = (Button) findViewById(R.id.buttonClick);
                buttonClick.setOnClickListener(new OnClickListener() {
                    public void onClick(View v) {
                        preview.camera.takePicture(shutterCallback, rawCallback, jpegCallback);
                    }
                });
                Log.d(TAG, "onCreate'd");
            }

            ShutterCallback shutterCallback = new ShutterCallback() {
                public void onShutter() {
                    Log.d(TAG, "onShutter'd");
                }
            };

            /** Handles data for raw picture */
            PictureCallback rawCallback = new PictureCallback() {
                public void onPictureTaken(byte[] data, Camera camera) {
                    Log.d(TAG, "onPictureTaken - raw");
                }
            };

            /** Handles data for jpeg picture */
            PictureCallback jpegCallback = new PictureCallback() {
                public void onPictureTaken(byte[] data, Camera camera) {
                    FileOutputStream outStream = null;
                    try {
                        // write to local sandbox file system:
                        // outStream = CameraDemo.this.openFileOutput(
                        //     String.format("%d.jpg", System.currentTimeMillis()), 0);
                        // or write to the sdcard:
                        outStream = new FileOutputStream(
                            String.format("/sdcard/%d.jpg", System.currentTimeMillis()));
                        outStream.write(data);
                        outStream.close();
                        Log.d(TAG, "onPictureTaken - wrote bytes: " + data.length);
                    } catch (FileNotFoundException e) {
                        e.printStackTrace();
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                    Log.d(TAG, "onPictureTaken - jpeg");
                }
            };
        }
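
    Since CameraDemo is an Activity, the usual way to "call it" from that dialog handler is to launch it with an Intent instead of only swapping layouts. A sketch, assuming the handler lives in an Activity (called MainActivity here for illustration) and that CameraDemo is declared in the manifest:

        // requires: import android.content.Intent;
        public void onClick(DialogInterface arg0, int arg1) {
            // Hand control to the camera screen; CameraDemo's onCreate
            // then sets up the preview and its own take-picture button.
            startActivity(new Intent(MainActivity.this, CameraDemo.class));
        }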

    Read the article

  • Write cache and write sequence order

    - by excanoe
    OK, here I have a somewhat weird question: let's say we have some binary file (.log) and a sequence of write operations, for example log1, log2, log3, each with some block size n (raw data). Question: can I be sure that the log1, log2 and log3 sequences will be written in the correct order to ONE file, even if there are several cache levels (disk hardware and OS level)? Update: I am very interested in what happens to the record order (not the records themselves) if we have a software or hardware failure (a reboot or another reason). Update: some percentage of write failures is acceptable, but the main question is: will the write order stay correct?
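
    To make the order, and not just the data, survive a failure, the usual move is to drain the caches between records; in C that is fsync() (or opening with O_SYNC), at an obvious throughput cost, and it still assumes the drive honours cache-flush commands. A sketch:

        #include <fcntl.h>
        #include <unistd.h>

        /* Force each record down before the next is written, so the
         * on-disk order survives a crash; without fsync() the OS page
         * cache and the drive's write cache may both reorder writes. */
        void append_record(int fd, const void *rec, size_t n)
        {
            write(fd, rec, n);   /* error handling elided in this sketch */
            fsync(fd);           /* block until kernel/drive report durability */
        }

        int main(void)
        {
            int fd = open("data.log", O_WRONLY | O_APPEND | O_CREAT, 0644);
            append_record(fd, "log1", 4);
            append_record(fd, "log2", 4);
            append_record(fd, "log3", 4);
            close(fd);
            return 0;
        }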

    Read the article

  • ASP.NET MVC4: How to convert an IEnumerable to a string for ViewBag

    - by sehummel
    This is what I'm trying to do, but it doesn't work:

        HardwareType hwt = new HardwareType { HType = "PC" };
        IEnumerable<Hardware> Pcs = db.Hardware.Where(h => h.HardwareType.Contains(hwt));
        ViewBag.Pcs = Pcs.ToString();

    So how do I convert my IEnumerable to a string (or other primitive data type) so the compiler won't give me an error when I try to use it in my Razor?

        @foreach (var item in ViewBag.Pcs) {
            <li><a href="#" class="btn"><i class="icon-hdd"></i> @item.HType</a></li>
        }
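
    The usual fix is not to stringify at all: ToString() on a LINQ sequence only yields the type name, which is why the foreach then breaks. A sketch against the same names, materializing with ToList() so the Razor loop can enumerate it:

        HardwareType hwt = new HardwareType { HType = "PC" };
        List<Hardware> pcs = db.Hardware
                               .Where(h => h.HardwareType.Contains(hwt))
                               .ToList();      // run the query once, here
        ViewBag.Pcs = pcs;                     // dynamic, but still a List<Hardware>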

    Read the article

  • Python frequency detection

    - by Tsuki
    OK, what I'm trying to do is a kind of audio processing software that can detect a prevalent frequency, and if that frequency is played for long enough (a few ms), I know I have a positive match. I know I would need to use FFT or something similar, but in this field of math I suck; I did search the Internet but did not find code that does only this. The goal I'm trying to achieve is to make myself a custom protocol to send data through sound. I need a very low bitrate per second, but I'm also very limited on the transmitting end, so the receiving software will need to be custom (I can't use an actual hardware/software modem). I also want this to be software only (no additional hardware except the sound card). Thanks a lot for the help.
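
    A minimal sketch of the detection half with numpy's real FFT (the sample rate and window size are illustrative; a real decoder would also require the same peak bin across several consecutive windows before declaring a match):

        import numpy as np

        RATE = 44100          # samples per second
        CHUNK = 2048          # ~46 ms per analysis window

        def dominant_frequency(samples, rate=RATE):
            windowed = samples * np.hanning(len(samples))  # reduce spectral leakage
            spectrum = np.abs(np.fft.rfft(windowed))
            freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
            return freqs[np.argmax(spectrum)]

        # e.g. a pure 440 Hz test tone:
        t = np.arange(CHUNK) / RATE
        print(dominant_frequency(np.sin(2 * np.pi * 440 * t)))  # ~440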

    Read the article

  • Elegant way to add functionality to previously defined functions

    - by Bastiaan
    How do I combine two functions together? I have a class controlling some hardware:

        class Heater():
            def set_power(self, dutycycle, period):
                ...
            def turn_on(self):
                ...
            def turn_off(self):
                ...

    And a class that connects to a database and handles all data logging functionality for an experiment:

        class DataLogger():
            def __init__(self):
                # Record measurements and controls in a database
                ...
            def start(self, t):
                # Starts a new thread to acquire and record measurements
                # every t seconds
                ...

    Now, in my program recipe.py, I want to do something like:

        log = DataLogger()
        @DataLogger_decorator   # pseudocode: wrap H1 so its calls are logged
        H1 = Heater()
        log.start(60)
        H1.set_power(10, 100)
        H1.turn_on()
        sleep(10)
        H1.turn_off()
        # etc

    where all actions on H1 are recorded by the DataLogger. I can change any of the classes involved; I'm just looking for an elegant way to do this. Ideally the hardware functions remain separate from the database and DataLogger functions, and ideally the DataLogger is reusable for other controls and measurements.
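
    One sketch of that decorator idea: a generic proxy that forwards every method call and reports it, so Heater stays pure hardware code and DataLogger stays reusable (the record() method on DataLogger is assumed here, not part of the original class):

        import functools

        class Logged:
            """Wrap any control object; forward calls and record each one."""
            def __init__(self, target, logger):
                self._target = target
                self._logger = logger

            def __getattr__(self, name):
                attr = getattr(self._target, name)
                if not callable(attr):
                    return attr
                @functools.wraps(attr)
                def wrapper(*args, **kwargs):
                    self._logger.record(name, args, kwargs)  # assumed API
                    return attr(*args, **kwargs)
                return wrapper

        # Usage in recipe.py:
        #   log = DataLogger()
        #   H1 = Logged(Heater(), log)
        #   H1.set_power(10, 100)   # forwarded to Heater, recorded by log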

    Read the article

  • How much market share does OpenGL ES 2.0 have among iPhone OS hardware (iPhone/iPod Touch)?

    - by Eonil
    I'm planning on making a game for the App Store, so I'm studying GLES. But the GLES 1.1 and 2.0 APIs differ in how they handle some features (and in their limitations). I don't have enough time to consider both of them; I have to choose one. 2.0 is clearly better from a developer's point of view, but I'm worried about its market share. I hope most users have moved on to the newer SGX-based hardware, but in fact, I don't know. Does anybody have information on where to find that hardware ratio data for iPhone OS supported hardware (iPhone/iPod Touch, per GPU)? Please let me know.

    Read the article

  • Software buttons in the Android 4.2 emulator not showing up (tablet)

    - by The_Unknown
    I'm going crazy right now. I just installed the latest Android ADT bundle from http://developer.android.com/sdk/index.html. It's version v21.0.0. Now I want to test my tablet app (designed for 10.1" xlarge mdpi) in the emulator, but I cannot get any software buttons for home/back/etc. Here's my device configuration; this config is afterwards assigned to the AVD: http://i.stack.imgur.com/Q7xvP.png Hardware buttons don't work either (you cannot set hardware buttons in a tablet-like emulator). The target API is level 15 (Android 4.0.3). I searched Stack Overflow but didn't find any help concerning the latest version of Android. Since time's running away a little, any quick help would be great! Thanks in advance. Bye, The_Unknown
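
    From what I recall of the emulator's device definitions, the software navigation bar only appears when the AVD claims to have no hardware keys; a sketch of the relevant lines in the AVD's config.ini (~/.android/avd/<name>.avd/config.ini):

        hw.mainKeys=no      # no hardware back/home keys -> emulator draws the nav bar
        hw.keyboard=yes     # optional: keep the host keyboard usable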

    Read the article

  • How can I determine if a specified string is in a specific MySQL column? (and also perhaps a tutorial?)

    - by Rob
    This is a fairly simple question. Basically, I have a program send hardware IDs to my PHP script as GET data. The PHP script needs to check that the hardware ID is in a specific MySQL column, and if it is, { continue the script, } else { exit(); }. The problem is I'm not too good with MySQL and have no idea how to do this. However, I feel that I should know this by now, so if someone could also link me to a good tutorial site for MySQL, one that kind of keeps it "humanized", if you know what I mean, one that "dumbs it down". I'm not dumb or anything; I just get sidetracked easily, and if all it's doing is showing me code and not explaining it, I won't pick it up. If you don't have any tutorial sites off the top of your head, I'll settle for help with the first question and try to hunt down a tutorial later.
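
    A sketch of the PHP side with mysqli and a prepared statement (the table and column names, clients and hwid, are placeholders; the parameter binding is what keeps the GET value from injecting SQL):

        <?php
        $db = new mysqli('localhost', 'user', 'pass', 'mydb');

        $stmt = $db->prepare('SELECT 1 FROM clients WHERE hwid = ? LIMIT 1');
        $stmt->bind_param('s', $_GET['hwid']);   // 's' = bind as string
        $stmt->execute();
        $stmt->store_result();

        if ($stmt->num_rows === 0) {
            exit();          // unknown hardware ID: stop right here
        }
        // known hardware ID: continue the script...
        ?>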

    Read the article

  • Windows: what is the difference between DEP always on and DEP opt-out with no exceptions?

    - by Peter Mortensen
    What is the difference between DEP always on ("/NoExecute=AlwaysOn" in boot.ini) and DEP opt-out ("/NoExecute=OptOut" in boot.ini) with no exceptions? "No exceptions" = an empty list of programs for which DEP does not apply. DEP = Data Execution Prevention (hardware).

    One would expect them to work the same way, but it makes a difference for some applications, e.g. for all versions of UltraEdit 14 (14.2). It crashes at startup with DEP always on, at least on Microsoft Windows XP Professional x64 Edition. (2010-03-11: this problem has been fixed in UltraEdit 15.2 and later.)

    Update 1: I think this difference is caused by the backdoors that Microsoft has put into hardware DEP for OptOut, according to Fabrice Roux (see below). In the case of IrfanView, for which Steve Gibson observed the same difference as I did for UltraEdit (see below), the difference is caused by a non-DEP-aware EXE packer (ASPack) that Microsoft coded a backdoor for.

    Is there a difference between Windows XP, Windows Vista and Windows 7? Is there a difference between 32-bit and 64-bit versions of Windows?

    Sources:

    From [http://blog.fabriceroux.com/index.php/2007/02/26/hardware_dep_has_a_backdoor?blog=1], "Hardware DEP has a backdoor" by Fabrice Roux, 2007-02-26: "IrfanView was not using any trick to evade DEP ... Microsoft just coded a backdoor used only in OPTOUT. Basically Microsoft checks the executable header for a section matching one of the 3 strings. If one of these strings is found, DEP will be turned OFF for this application by Windows. ... 'aspack', 'pcle', 'sforce'"

    From [http://www.grc.com/sn/sn-078.htm], by Steve Gibson: "I can't find any documentation on Microsoft's site anywhere, because we're seeing a difference between always-on and opt-out. That is, you would imagine that always-on mode would be the same as opting out if you weren't having any opt-out programs. It turns out it's not the case. For example ... the IrfanView file viewer ... runs fine in opt-out mode, even if it has not been opted out. But it won't launch, Windows blocks it from launching ... in always-on mode."

    From [http://www.grc.com/sn/sn-083.htm], by Steve Gibson: "... IrfanView ... won't run with DEP turned on. It's because it uses an EXE packer, an executable compression program called ASPack. And it makes sense that it wouldn't because naturally an executable compressor has got to decompress the executable, so it allocates a bunch of data memory into which it decompresses the compressed executable, and then it runs it. Well, it's running a data allocation, which is exactly what DEP is designed to stop. On the other hand, UPX, which is actually the leading and most popular EXE compressor, it's DEP-compatible because those guys realized, hey, when we allocate this memory, we should mark the pages as executable."
    Read the article

  • Oracle performance problem

    - by jreid42
    We are using an Oracle 11g machine that is very powerful and has redundant storage, etc. It's a beast, from what I have been told. We got this DB for a tool that had about 20 users when I first came on as a co-op; now it's upwards of 150 people, and I am the only one working on it :(

    We currently have a system in place that distributes Perl scripts across our entire data center, essentially giving us a sort of "grid" computing power. The Perl scripts run a sort of simulation and report the results back to the database. They do selects and inserts. The load is not very high for each script, but it could be happening across 20-50 systems at the same time. We then have multiple data centers and users all hitting the same database with this same approach.

    Our main problem is that our database is getting overloaded with connections and having to drop some. We sometimes have upwards of 500 connections. These are old Perl scripts and they do not handle this well; essentially they fail and the results are lost. I would rather avoid rewriting a lot of them, as they are poorly written and a headache to even look at. The database itself is not overloaded; just the connection overhead is too high. We open a connection, make a quick query, and then drop the connection: very short connections, but many of them. The database team has basically said we need to lower the number of connections or they are going to ignore us.

    Because this is distributed across our farm, we can't implement persistent connections. I do this with our web server, but that is on a fixed system. The others are Perl scripts that get opened and closed by the distribution tool and thus aren't always running.

    What would be my best approach to resolving this issue? The scripts themselves can wait for a connection to be open; they do not need to act immediately. Some sort of queuing system? I've been advised to set up a few instances of a tool called "SQL Relay", maybe one in each data center. How reliable is this tool? How good is this approach? Would it work for what we need? We could have one per data center and relay requests through it to our main database, keeping a pipeline of open persistent connections. Does this make sense? Are there any other suggestions or ideas you can offer?

    Any help would be greatly appreciated. Sadly, I am just a co-op student working for a very big company, and somehow all of this has landed on my shoulders (there is literally nobody to ask for help; it's a hardware company, everybody is a hardware engineer, and the database team is useless and in India), and I am quite lost as to what the best approach would be. I am extremely overworked, and this problem is interfering with ongoing progress and basically needs to be resolved as quickly as possible; preferably without rewriting the whole system, purchasing hardware (not gonna happen), or shooting myself in the foot. HELP LOL!
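
    Since the scripts can afford to wait, one low-effort stopgap (short of a real pooler like SQL Relay) is a capped-backoff retry around the connect, so a refused connection becomes a delay instead of a lost result. A Perl sketch (the DSN and credentials are placeholders):

        use strict;
        use warnings;
        use DBI;

        my $dbh;
        for my $attempt (1 .. 10) {
            $dbh = DBI->connect('dbi:Oracle:MYDB', 'user', 'pass',
                                { RaiseError => 0, PrintError => 0 });
            last if $dbh;
            sleep 2 ** ($attempt < 6 ? $attempt : 6);  # capped exponential backoff
        }
        die "no connection after retries: $DBI::errstr" unless $dbh;

        # ... quick select/insert as before, then disconnect promptly ...
        $dbh->disconnect;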

    Read the article

  • Dell Latitude E6430 Docking Station + Dual Monitor + Laptop Screen Tri-Monitor setup

    - by Larry
    I have a company-issued laptop and docking station, as well as two monitors. The specifications of the hardware are as follows:

        Laptop: Latitude E6430
        BIOS: A02.00
        Processor: i7-3720QM CPU @ 2.60 GHz (8 CPUs)
        Memory: 4096MB RAM
        Page file: 1825MB used, 4793MB available
        DirectX 11
        Display Driver/Chip: NVIDIA NVS 5200M
        DAC: Integrated RAMDAC
        Approx. Total Memory: 2376MB (above 3 details the same for both displays)
        Current Display Mode (Display 1): 1600x900
        Current Display Mode (Display 2): 1440x900

    The docking station is a Dell Latitude E6420 Docking Station PR03X Port Replicator. I don't think the monitor model is particularly relevant to resolving this issue, but they are both Acer V193Ws. The story goes like this:

        - The laptop works fine on its own.
        - If I connect one monitor through the VGA port on the back left-hand side of the laptop, I can achieve a dual-monitor display fine (laptop screen + monitor).
        - If I plug the laptop into the docking station and use the VGA port on the back of the docking station, I can dual-monitor fine (laptop screen + monitor).
        - If I plug the laptop into the docking station, the laptop's left-hand side VGA port no longer seems to function at all.

    I've spoken to internal IT about this issue, and they're going to get me some kind of VGA splitter or a DVI-VGA adapter to use with the docking station for the second Acer monitor, but that isn't going to happen for a few days. So I guess what I'm wondering is: is there any way to continue to use the side VGA port on my laptop while using the docking station's VGA port?

    As a secondary follow-up, pending resolution of the initial issue of getting both monitors up and running (at the moment I have both monitors on my desk but am just using my laptop screen as one half of my dual-monitor display, with the dock-connected monitor as the other half): is there any way to CONTINUE to use my laptop monitor and in effect have a triple-monitor display (2 monitors + docked laptop)? I am wondering this because internal IT told me they were aware of some issues between the particular display drivers in my box and triple-monitor displays, but weren't really going to look too deeply into that (which is perfectly understandable), since getting the adapter for the dual monitors up and running was the greater priority within their purview.

    So this is a two-parter:

        1. Can I dual-monitor using two VGA cables, with one docking station VGA port and one laptop VGA port? Is there a setting that can be tweaked somewhere? Plugging the box into the station seems to make the side port stop working.
        2. Is there any reasonably simple and cost-effective workaround? I am fine with shelling out a few dollars of my own for some hardware or software to make my company box tri-display capable, but if this requires some extensive rebuild, a new OS, or doing stuff to the BIOS, I'd rather have a straight answer that it is untenable as a slight modification to a (once again) company laptop, and stop wasting time looking into it.

    Thanks! And please let me know if you need any more details (tech specs or something) to answer this question.

    [EDIT] 2/10/2014: Just an update; it turned out it really was just a hardware limitation. The old laptop just couldn't hack it. I got a new laptop with a better video card and different monitors from my company, and am successfully using a triple display (2 monitors + laptop screen).

    Read the article

  • Network config / gear question

    - by mcgee1234
    I have been tasked with setting up a fairly straightforward rack in a data center (we do not even need a whole rack, but this is the smallest allotment available). In a nutshell, 4 to 6 servers need to be able to reach 2 (maybe 3) vendors, and the servers need to be reachable over the Internet.

    A little more detail: the networks the servers need to reach are inside the data center and are "trusted". Connections to these networks will be made through intra-data-center cross connects. It is kind of like a manufacturing line, where we receive data from one vendor (burstable up to 200 Mbit/s), churn through it on the servers, and then send out data to another vendor (bursts up to 20 Mbit/s). This series of events is very latency sensitive, so much so that it is common practice not to use NAT or a firewall on these segments (or so I hear). To reach the servers over the Internet, I plan to use a site-to-site VPN (this part is only relevant as far as hardware selection goes). I have two configurations in mind:

        1. A Cisco 2911 (2921) (with the additional WAN ports module) and a layer 2 switch; in this scenario, I would also use the router for the VPN.
        2. A Cisco 3560 layer 3 switch to interconnect the networks inside the data center, and an ASA 5510 (which is total overkill, but the 5505 is not rack mountable) as a firewall for the WAN side (Internet) and the VPN. I envision this setup as follows:

            Internet - ASA - 3560
            Vendors - 3560 - Servers

    The general idea in the second setup is that the ASA acts as the firewall and VPN device while the 3560 does all the heavy lifting. The first is a fairly traditional setup, but my concern is performance. The second is somewhat unorthodox in that the vendors are directly connected to the layer 3 switch without passing through a firewall. Based on my understanding, however, a layer 3 switch will perform substantially better, as it does hardware (ASIC) rather than software switching. (Note that number 2 is a little over the budget, but not unworkable (double negative, ugh).)

    Since this is my first time dealing with a data center, I am not sure what the IP space is going to look like. I suspect I will retain a block (or blocks) of public IPs, VLAN them to individual interfaces for the vendor connections and the servers (which will not be reachable from the WAN side, of course), and set up routing on the switch. So here are my questions:

        1. Is there a substantial performance difference between 1 and 2, i.e. hardware-based switching on the layer 3 switch vs. software-based switching on the 2911? I have trawled the Internet and found a lot of Cisco literature, but nothing I could really use to get a good handle on it.
        2. The vendors we connect to are secure and trusted (famous last words), and as I understand it, it is common practice not to NAT or firewall these connections (because of the aforementioned latency sensitivity). But what kind of latency are we really talking about if I push the data through a router (or even an ASA, for that matter)? For our purposes, 5 ms will not kill us; 20 or 30 can be very costly. Others measure in microseconds, but they are out of our league.
        3. Are there any issues with using public IPs on a layer 3 switch?

    I am certainly not married to either of these configs, and I am totally open to any ideas. My knowledge (and I use the term loosely) is largely from books, so I welcome any advice / insight. Thanks in advance.
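
    For the second design, the 3560 side would be plain routed VLANs, one per cross connect; an IOS-style sketch from memory (addresses are from the documentation range and names are invented), just to show where the public block and the routing live:

        ip routing
        !
        vlan 10
         name VENDOR-A
        !
        interface GigabitEthernet0/1
         description cross-connect to vendor A
         switchport access vlan 10
        !
        interface Vlan10
         ip address 203.0.113.1 255.255.255.252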

    Read the article
