Search Results

Search found 789 results on 32 pages for 'chip castle'.

Page 29/32

  • .NET Web Service hydrate custom class

    - by row1
    I am consuming an external C# Web Service method which returns a simple calculation result object like this:

        [Serializable]
        public class CalculationResult
        {
            public string Name { get; set; }
            public string Unit { get; set; }
            public decimal? Value { get; set; }
        }

    When I add a Web Reference to this service in my ASP.NET project, Visual Studio is kind enough to generate a matching class so I can easily consume and work with it. I am using Castle Windsor and I may want to plug in other methods of getting a calculation result object, so I want a common class CalculationResult (or ICalculationResult) in my solution which all my objects can work with; this will always match the object returned from the external Web Service 1:1. Is there any way I can tell my Web Service client to hydrate a particular class instead of its generated one? I would rather not do it manually:

        foreach (var fromService in calculationResultsFromService)
        {
            ICalculationResult calculationResult = new CalculationResult() { Name = fromService.Name };
            yield return calculationResult;
        }

    Edit: I am happy to use a Service Reference type instead of the older Web Reference.
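
    One possible direction, sketched below under the assumption that the generated proxy class and your own CalculationResult match property-for-property, is a small reflection-based mapper; ServiceMapper and MapTo are illustrative names, not an existing API:

        using System;
        using System.Linq;

        public static class ServiceMapper
        {
            // Copies readable properties of the source object onto same-named,
            // assignment-compatible writable properties of a new TTarget.
            public static TTarget MapTo<TTarget>(object source) where TTarget : new()
            {
                var target = new TTarget();
                foreach (var tp in typeof(TTarget).GetProperties().Where(p => p.CanWrite))
                {
                    var sp = source.GetType().GetProperty(tp.Name);
                    if (sp != null && sp.CanRead && tp.PropertyType.IsAssignableFrom(sp.PropertyType))
                        tp.SetValue(target, sp.GetValue(source, null), null);
                }
                return target;
            }
        }

    Usage would then be a one-liner per result, e.g. ServiceMapper.MapTo<CalculationResult>(fromService), and the manual copy loop disappears.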

    Read the article

  • Compile time float packing/punning

    - by detly
    I'm writing C for the PIC32MX, compiled with Microchip's PIC32 C compiler (based on GCC 3.4). My problem is this: I have some reprogrammable numeric data that is stored either in EEPROM or in the program flash of the chip. This means that when I want to store a float, I have to do some type punning:

        typedef union
        {
            int intval;
            float floatval;
        } IntFloat;

        unsigned int float_as_int(float fval)
        {
            IntFloat intf;
            intf.floatval = fval;
            return intf.intval;
        }

        // Stores an int of data in whatever storage we're using
        void StoreInt(unsigned int data, unsigned int address);

        void StoreFPVal(float data, unsigned int address)
        {
            StoreInt(float_as_int(data), address);
        }

    I also include default values as an array of compile-time constants. For (unsigned) integer values this is trivial: I just use the integer literal. For floats, though, I have to use this Python snippet to convert them to their word representation so I can include them in the array:

        import struct
        hex(struct.unpack("I", struct.pack("f", float_value))[0])

    ...and so my array of defaults has these indecipherable values like:

        const unsigned int DEFAULTS[] = {
            0x00000001, // Some default integer value, 1
            0x3C83126F, // Some default float value, 0.005
        };

    (These actually take the form of X macro constructs, but that doesn't make a difference here.) Commenting is nice, but is there a better way? It'd be great to be able to do something like:

        const unsigned int DEFAULTS[] = {
            0x00000001,                  // Some default integer value, 1
            COMPILE_TIME_CONVERT(0.005), // Some default float value, 0.005
        };

    ...but I'm completely at a loss, and I don't even know if such a thing is possible. Notes: Obviously "no, it isn't possible" is an acceptable answer if true. I'm not overly concerned about portability, so implementation-defined behaviour is fine, undefined behaviour is not (I have the IDB appendix sitting in front of me). As far as I'm aware, this needs to be a compile-time conversion, since DEFAULTS is at global scope. Please correct me if I'm wrong about this.
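
    For what it's worth, one compile-time route is to make the defaults array itself the union type, so the compiler lays down the IEEE-754 bit patterns in flash and no external conversion step is needed. This is a sketch, assuming the toolchain accepts C99 designated initializers (GCC has long supported them, but it is worth confirming for the PIC32 port):

        typedef union
        {
            unsigned int intval;
            float floatval;
        } IntFloat;

        /* The float entries are emitted by the compiler as raw bits,
           so no Python pre-conversion step, and the source stays readable. */
        static const IntFloat DEFAULTS[] = {
            { .intval   = 0x00000001u }, /* integer default, 1   */
            { .floatval = 0.005f      }, /* float default, 0.005 */
        };

        unsigned int default_word(unsigned int index)
        {
            /* Reading .intval after initialising .floatval is the same
               punning the existing float_as_int() helper relies on. */
            return DEFAULTS[index].intval;
        }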

    Read the article

  • How can I write a unit test to determine whether an object can be garbage collected?

    - by driis
    In relation to my previous question, I need to check whether a component that will be instantiated by Castle Windsor can be garbage collected after my code has finished using it. I have tried the suggestion in the answers to the previous question, but it does not seem to work as expected, at least for my code. So I would like to write a unit test that tests whether a specific object instance can be garbage collected after some of my code has run. Is that possible to do in a reliable way?

    EDIT: I currently have the following test, based on Paul Stovell's answer, which succeeds:

        [TestMethod]
        public void ReleaseTest()
        {
            WindsorContainer container = new WindsorContainer();
            container.Kernel.ReleasePolicy = new NoTrackingReleasePolicy();
            container.AddComponentWithLifestyle<ReleaseTester>(LifestyleType.Transient);

            Assert.AreEqual(0, ReleaseTester.refCount);
            var weakRef = new WeakReference(container.Resolve<ReleaseTester>());
            Assert.AreEqual(1, ReleaseTester.refCount);

            GC.Collect();
            GC.WaitForPendingFinalizers();

            Assert.AreEqual(0, ReleaseTester.refCount, "Component not released");
        }

        private class ReleaseTester
        {
            public static int refCount = 0;

            public ReleaseTester()
            {
                refCount++;
            }

            ~ReleaseTester()
            {
                refCount--;
            }
        }

    Am I right in assuming that, based on the test above, I can conclude that Windsor will not leak memory when using the NoTrackingReleasePolicy?
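
    A variation worth considering, sketched below, drops the finalizer bookkeeping and asserts directly on a WeakReference; keeping the resolve in a separate non-inlined method (System.Runtime.CompilerServices) avoids a stack-rooted local keeping the instance alive. The container calls mirror the test above; the rest is illustrative:

        [TestMethod]
        public void ComponentCanBeGarbageCollected()
        {
            WeakReference weakRef = ResolveWeakly();

            GC.Collect();
            GC.WaitForPendingFinalizers();
            GC.Collect();

            Assert.IsFalse(weakRef.IsAlive, "Something is still holding on to the component");
        }

        [MethodImpl(MethodImplOptions.NoInlining)]
        private static WeakReference ResolveWeakly()
        {
            var container = new WindsorContainer();
            container.Kernel.ReleasePolicy = new NoTrackingReleasePolicy();
            container.AddComponentWithLifestyle<ReleaseTester>(LifestyleType.Transient);
            return new WeakReference(container.Resolve<ReleaseTester>());
        }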

    Read the article

  • Is it possible to send OSC commands to an iPad via the Camera Connection Kit?

    - by HELVETICADE
    I'm building a small controller device that I'd like to partner with a computer. I've settled on using OSC out from my custom-built hardware and am pretty satisfied with what I can get from WOscLib. Two goals I'd like to achieve are portability and a very nice ratio between battery and computing power, and this has lured me towards using iPhone OS to accomplish my goals. I think the iPad would suit my needs perfectly, except that using wifi to broadcast OSC out from my device would require the device to be connected to a third device with a wifi chip, which would destroy the goal of portability while also introducing potential latency and stability headaches. My question is pretty simple: can I push OSC commands FROM my controller TO an iPad via USB and the Camera Connection Kit? If I could accomplish this, the two major goals of my project would be fulfilled very nicely. This seems like it should be a simple little question, but researching it obsessively over the past few weeks has left me almost more uncertain than if I had done no research at all. I'd really like some more confidence before I go down this route, and it seems like it should be possible. Any insight would be very, very appreciated.

    Read the article

  • DataSets to POCOs - an inquiry regarding DAL architecture

    - by alexsome
    Hello all, I have to develop a fairly large ASP.NET MVC project very quickly and I would like to get some opinions on my DAL design to make sure nothing will come back to bite me, since the BL is likely to get pretty complex. A bit of background: I am working with an Oracle backend, so the built-in LINQ to SQL is out; I also need to use production-level libraries, so the Oracle EF provider project is out; finally, I am unable to use any GPL or LGPL code (Apache, MS-PL, BSD are okay), so NHibernate/Castle Project are out. I would prefer, if at all possible, to avoid dishing out money, but I am more concerned about implementing the right solution. To summarize, these are my requirements:

    - Oracle backend
    - Rapid development
    - (L)GPL-free
    - Free

    I'm reasonably happy with DataSets, but I would benefit from using POCOs as an intermediary between DataSets and views. Who knows, maybe at some point another DAL solution will show up and I will get the time to switch it out (yeah, right). So, while I could use LINQ to convert my DataSets to IQueryable, I would like a generic solution so I don't have to write a custom query for each class. I'm tinkering with reflection right now, but in the meantime I have two questions: Are there any problems I have overlooked with this solution? Are there any other approaches you would recommend for converting DataSets to POCOs? Thanks in advance.
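
    Since the question mentions tinkering with reflection, here is a minimal sketch of the generic DataTable-to-POCO direction, matching columns to writable properties by name; DataTableMapper is an illustrative name, not an existing library:

        using System;
        using System.Collections.Generic;
        using System.Data;
        using System.Linq;

        public static class DataTableMapper
        {
            // Projects each DataRow onto a new T; columns with no matching
            // property are ignored, and DBNull leaves the property at its default.
            public static IEnumerable<T> ToPocos<T>(DataTable table) where T : new()
            {
                var props = typeof(T).GetProperties()
                    .Where(p => p.CanWrite && table.Columns.Contains(p.Name))
                    .ToList();

                foreach (DataRow row in table.Rows)
                {
                    var item = new T();
                    foreach (var p in props)
                    {
                        object value = row[p.Name];
                        if (value != DBNull.Value)
                            p.SetValue(item, value, null);
                    }
                    yield return item;
                }
            }
        }

    Caching the property list per type (say, in a static dictionary) is worth adding if this ends up on a hot path, since GetProperties on every call is the expensive part.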

    Read the article

  • trying to append a list, but something breaks

    - by romunov
    I'm trying to create an empty list which will have as many elements as there are num.of.walkers. I then try to append, to each created element, a new sub-list (the length of each new sub-list corresponds to a value in a). When I fiddle around in R, everything goes smoothly:

        list.of.dist[[1]] <- vector("list", a[1])
        list.of.dist[[2]] <- vector("list", a[2])
        list.of.dist[[3]] <- vector("list", a[3])
        list.of.dist[[4]] <- vector("list", a[4])

    I then try to write a function. Here is my feeble attempt, which results in an error. Can someone chip in on what I am doing wrong?

        countNumberOfWalks <- function(walk.df) {
          list.of.walkers <- sort(unique(walk.df$label))
          num.of.walkers <- length(unique(walk.df$label))

          # Pre-allocate objects for further manipulation
          list.of.dist <- vector("list", num.of.walkers)
          a <- c()

          # Count the number of walks per walker.
          for (i in list.of.walkers) {
            a[i] <- nrow(walk.df[walk.df$label == i,])
          }
          a <- as.vector(a)

          # Add a sublist (length = number of walks) for each walker.
          for (i in i:num.of.walkers) {
            list.of.dist[[i]] <- vector("list", a[i])
          }

          return(list.of.dist)
        }

        > num.of.walks.per.walker <- countNumberOfWalks(walk.df)
        Error in vector("list", a[i]) : vector size cannot be NA
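
    For reference, the NA most likely comes from the second loop: for (i in i:num.of.walkers) reuses i, which at that point still holds the last walker label from the first loop, and a has also lost its names through as.vector. A corrected sketch of the two loops, indexing by position throughout (assuming walker labels need not be 1..n):

        # Count the number of walks per walker, indexing by position.
        for (i in seq_along(list.of.walkers)) {
          a[i] <- nrow(walk.df[walk.df$label == list.of.walkers[i], ])
        }

        # Add a sublist (length = number of walks) for each walker.
        for (i in seq_along(list.of.walkers)) {
          list.of.dist[[i]] <- vector("list", a[i])
        }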

    Read the article

  • This is a great job opportunity!!! [closed]

    - by Stuart Gordon
    ASP.NET MVC Web Developer / London / £450pd / £25-£50,000pa / Interested? Contact [email protected]!

    As a web developer within the engineering department, you will work with a team of enthusiastic developers building a new ASP.NET MVC platform for online products, utilising exciting cutting-edge technologies and methodologies (elements of Agile, Scrum, Lean, Kanban and XP), as well as developing new stand-alone web products that conform to W3C standards.

    Key Responsibilities and Objectives: Develop ASP.NET MVC websites utilising frameworks and enterprise search technology. Develop and expand content management and delivery solutions. Help maintain and extend existing products. Formulate ideas and visions for new products and services. Be a proactive part of the development team and provide support and assistance to others when required.

    Qualification/Experience Required: The ideal candidate will have a web development background and be educated to degree level in a Computer Science/IT-related course, plus ASP.NET MVC experience. The successful candidate needs to be able to demonstrate commercial experience in all or most of the following skills. Essential: ASP.NET MVC with C# (Visual Studio), Castle, NHibernate, XHTML and JavaScript; experience of Test Driven Development (TDD) using tools such as NUnit. Preferable: experience of Continuous Integration (TeamCity and MSBuild), SQL Server (T-SQL), experience of source control such as Subversion (plus TortoiseSVN), jQuery. Learn: Fluent NHibernate, S#arp Architecture, Spark (view engine), Behaviour Driven Design (BDD) using MSpec.

    Furthermore, you will possess good working knowledge of W3C web standards, web usability, web accessibility and understand the basics of search engine optimisation (SEO). You will also be a quick learner, have good communication skills and be a self-motivated and organised individual.

    Read the article

  • Unit testing an MVC action method with a Cache dependency?

    - by Steve
    I'm relatively new to testing and MVC and came across a sticking point today. I'm attempting to test an action method that has a dependency on HttpContext.Current.Cache, and wanted to know the best practice for achieving the "low coupling" that allows easy testing. Here's what I've got so far:

        public class CacheHandler : ICacheHandler
        {
            public IList<Section3ListItem> StateList
            {
                get { return (List<Section3ListItem>)HttpContext.Current.Cache["StateList"]; }
                set { HttpContext.Current.Cache["StateList"] = value; }
            }
            ...

    I then access it like such (I'm using Castle for my IoC):

        public class ProfileController : ControllerBase
        {
            private readonly ISection3Repository _repository;
            private readonly ICacheHandler _cache;

            public ProfileController(ISection3Repository repository, ICacheHandler cacheHandler)
            {
                _repository = repository;
                _cache = cacheHandler;
            }

            [UserIdFilter]
            public ActionResult PersonalInfo(Guid userId)
            {
                if (_cache.StateList == null)
                    _cache.StateList = _repository.GetLookupValues((int)ELookupKey.States).ToList();
                ...

    Then in my unit tests I am able to mock up ICacheHandler. Would this be considered a 'best practice', and does anyone have any suggestions for other approaches? Thanks in advance. Cheers
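
    To illustrate the payoff of the abstraction, here is an NUnit-style sketch using hand-rolled fakes (no mocking framework). Type names follow the question; it assumes ICacheHandler exposes only StateList and ISection3Repository exposes only GetLookupValues, so the stubs below are hypothetical:

        private class FakeCacheHandler : ICacheHandler
        {
            public IList<Section3ListItem> StateList { get; set; }
        }

        private class StubRepository : ISection3Repository
        {
            // Returns a fixed, known list regardless of the lookup key.
            public IEnumerable<Section3ListItem> GetLookupValues(int key)
            {
                return new[] { new Section3ListItem() };
            }
        }

        [Test]
        public void PersonalInfo_FillsCache_OnFirstCall()
        {
            var cache = new FakeCacheHandler(); // StateList starts out null
            var controller = new ProfileController(new StubRepository(), cache);

            controller.PersonalInfo(Guid.NewGuid());

            Assert.IsNotNull(cache.StateList, "the action should populate the cache on a miss");
        }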

    Read the article

  • How do the major C# DI/IoC frameworks compare?

    - by Slomojo
    At the risk of stepping into holy war territory, what are the strengths and weaknesses of these popular DI/IoC frameworks, and could one easily be considered the best?

    - Ninject
    - Unity
    - Castle.Windsor
    - Autofac
    - StructureMap

    Are there any other DI/IoC frameworks for C# that I haven't listed here? In the context of my use case, I'm building a client WPF app and a WCF/SQL services infrastructure; ease of use (especially in terms of clear and concise syntax), consistent documentation, good community support and performance are all important factors in my choice.

    Update: The resources and duplicate questions cited appear to be out of date. Can someone with knowledge of all these frameworks come forward and provide some real insight? I realise that most opinion on this subject is likely to be biased, but I am hoping that someone has taken the time to study all these frameworks and can offer an at least generally objective comparison. I am quite willing to make my own investigations if this hasn't been done before, but I assumed this was something at least a few people had done already.

    Second Update: If you do have experience with more than one DI/IoC container, please rank and summarise the pros and cons of those, thank you. This isn't an exercise in discovering all the obscure little containers that people have made; I'm looking for comparisons between the popular (and active) frameworks.

    Read the article

  • Building an 'Activation Key' Generator in Java

    - by jax
    I want to develop a key generator for my phone applications. Currently I am using an external service to do the job, but I am a little concerned that the service might go offline one day, and then I would be in a bit of a pickle.

    How authentication works now: a public key is stored on the phone. When the user requests a key, the 'phone ID' is sent to the "Key Generation Service" and the encrypted key is returned and stored inside a license file. On the phone I can check whether the key is for the current phone by using a method getPhoneId(), which I can check against the current phone and grant or not grant access to features. I like this and it works well; however, I want to create my own "Key Generation Service" on my own website. Requirements:

    - Public and private key encryption (Bouncy Castle)
    - Written in Java
    - Must support getApplicationId() (so that many applications can use the same key generator) and getPhoneId() (to get the phone ID out of the encrypted license file)

    I want to be able to send the ApplicationId and PhoneId to the service for license key generation. Can someone give me some pointers on how to accomplish this? I have dabbled with some Java encryption but am definitely no expert and can't find anything that will help me. A list of the Java classes I would need to instantiate would be helpful.
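
    A rough sketch of the usual shape of such a service: sign the (applicationId, phoneId) pair with a private key on the server and verify it with the embedded public key on the phone. This uses the standard JCA classes with Bouncy Castle registered as the provider; key persistence, encodings and error handling are deliberately left out, and all names are illustrative:

        import java.security.KeyPair;
        import java.security.KeyPairGenerator;
        import java.security.PublicKey;
        import java.security.Security;
        import java.security.Signature;

        import org.bouncycastle.jce.provider.BouncyCastleProvider;

        public class KeyGenerationService {

            static {
                Security.addProvider(new BouncyCastleProvider());
            }

            // One-off: generate the service's key pair; ship the public key with the app.
            public static KeyPair newServiceKeys() throws Exception {
                KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA", "BC");
                gen.initialize(2048);
                return gen.generateKeyPair();
            }

            // Server side: the license is the signature over appId + phoneId.
            public static byte[] generateLicense(KeyPair serviceKeys, String appId, String phoneId) throws Exception {
                Signature signer = Signature.getInstance("SHA256withRSA", "BC");
                signer.initSign(serviceKeys.getPrivate());
                signer.update((appId + ":" + phoneId).getBytes("UTF-8"));
                return signer.sign();
            }

            // Phone side: recompute appId + phoneId locally and verify the license.
            public static boolean isLicenseValid(PublicKey servicePublicKey, String appId,
                                                 String phoneId, byte[] license) throws Exception {
                Signature verifier = Signature.getInstance("SHA256withRSA", "BC");
                verifier.initVerify(servicePublicKey);
                verifier.update((appId + ":" + phoneId).getBytes("UTF-8"));
                return verifier.verify(license);
            }
        }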

    Read the article

  • What's the best approach for getting into VS2010, C# 4, and WPF if my background is in C++/MFC

    - by Canacourse
    All my past programming experience has been in C++ on VS2003/8, mostly service-based and completely self-taught. Two years ago I had to create my first real GUI app and (foolishly) chose MFC. I got the app working, but it took a long time and was a bit of a nightmare to learn MFC (and its many shortcomings), and I ended up with a reliable, workable app that was difficult to change or extend. Again I have to create another GUI app, more complex than the first; again it will be created from scratch and will only ever be used on Windows. I had put off learning C# for a long time, but not wishing to revisit MFC, I have decided that the new application will be birthed in VS2010 with WPF 4 as the midwife. I'm trying to avoid the several expensive (time-wise) mistakes I made previously. I'm looking for good books/tutorials on the current versions of C# 4 and WPF 4, and also general advice on the best approach. The application will do several things, one of which is persisting info in a SQL DB, so I'm thinking LINQ for that? Please chip in...

    Read the article

  • load data from grid row into (pop up) form for editing

    - by user1495457
    I read in Ext JS in Action (by J. Garcia) that if we have an instance of Ext.data.Record, we can use the form's loadRecord method to set the form's values. However, he does not give a working example of this (in his example, data is loaded into a form through a file called data.php). I have searched many forums and found the following entry helpful, as it gave me an idea of how to solve my problem using the form's loadRecord method: load data from grid to form. Now the code for my store and grid is as follows:

        var userstore = Ext.create('Ext.data.Store', {
            storeId: 'viewUsersStore',
            model: 'Configs',
            autoLoad: true,
            proxy: {
                type: 'ajax',
                url: '/user/getuserviewdata.castle',
                reader: { type: 'json', root: 'users' },
                listeners: {
                    exception: function (proxy, response, operation, eOpts) {
                        Ext.MessageBox.alert("Error", "Session has timed-out. Please re-login and try again.");
                    }
                }
            }
        });

        var grid = Ext.create('Ext.grid.Panel', {
            id: 'viewUsersGrid',
            title: 'List of all users',
            store: Ext.data.StoreManager.lookup('viewUsersStore'),
            columns: [
                { header: 'Username', dataIndex: 'username' },
                { header: 'Full Name', dataIndex: 'fullName' },
                { header: 'Company', dataIndex: 'companyName' },
                { header: 'Latest Time Login', dataIndex: 'lastLogin' },
                { header: 'Current Status', dataIndex: 'status' },
                {
                    header: 'Edit',
                    menuDisabled: true,
                    sortable: false,
                    xtype: 'actioncolumn',
                    width: 50,
                    items: [{
                        icon: '../../../Content/ext/img/icons/fam/user_edit.png',
                        tooltip: 'Edit user',
                        handler: function (grid, rowIndex, colIndex) {
                            var rec = userstore.getAt(rowIndex);
                            alert("Edit " + rec.get('username') + "?");
                            EditUser(rec.get('id'));
                        }
                    }]
                },
            ]
        });

        function EditUser(id) {
            // I think I need to have this code here - I don't think it's complete/correct though
            var formpanel = Ext.getCmp('CreateUserForm');
            formpanel.getForm().loadRecord(rec);
        }

    'CreateUserForm' is the ID of a form that already exists and which should appear when the user clicks on the Edit icon. That pop-up form should then automatically be populated with the correct data from the grid row. However, my code is not working: I get an error at the line formpanel.getForm().loadRecord(rec), which says 'Microsoft JScript runtime error: 'undefined' is null or not an object'. Any tips on how to solve this?
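
    For reference, a sketch of the usual fix: rec only exists inside the action-column handler, so pass the record itself (rather than the id) into the edit function, show the pop-up, and then call loadRecord on its basic form:

        handler: function (grid, rowIndex, colIndex) {
            var rec = userstore.getAt(rowIndex);
            EditUser(rec);
        }

        // ...

        function EditUser(rec) {
            var formpanel = Ext.getCmp('CreateUserForm');
            formpanel.show(); // make the pop-up visible first
            // Fills every field whose `name` matches a field of the record's model.
            formpanel.getForm().loadRecord(rec);
        }

    The 'undefined' error in the original code comes from referencing rec inside EditUser, where no such variable is in scope.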

    Read the article

  • NHibernate lazy properties behavior?

    - by GeReV
    I've been trying to get NHibernate into development for a project I'm working on at my workplace. Since I have to put a strong emphasis on performance, I've been running a proof-of-concept stress test against an existing project's table with thousands of records, all of which contain a large text column. However, when selecting a collection of these records, the select statement takes a relatively long time to execute, apparently due to the aforementioned column. The first solution that comes to mind is setting this property as lazy:

        <property name="Content" lazy="true"/>

    But there seems to be no difference in the SQL generated by NHibernate. My question is: how do lazy properties behave in NHibernate? Are there some kind of type limitations I could be missing? Should I take a different approach altogether? Using HQL's select new Class(column1, column2) approach works, but lazy properties sound like a simpler solution. It's perhaps worth mentioning that I'm using NHibernate 2.1.2GA with the Castle DynamicProxy. Thanks!
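
    Two notes, hedged: if memory serves, property-level lazy loading only arrived with NHibernate 3.0, which would explain why 2.1.2GA silently ignores the attribute. Until an upgrade is possible, the constructor-projection workaround mentioned above looks roughly like this; RecordSummary is an illustrative DTO that needs a matching constructor and, if it is not a mapped class, an <import> in the mapping file:

        public IList<RecordSummary> LoadSummaries(ISession session)
        {
            // Fetches only the light columns, leaving the large text column untouched.
            return session
                .CreateQuery("select new RecordSummary(r.Id, r.Title) from Record r")
                .List<RecordSummary>();
        }

        public class RecordSummary
        {
            public RecordSummary(int id, string title)
            {
                Id = id;
                Title = title;
            }

            public int Id { get; private set; }
            public string Title { get; private set; }
        }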

    Read the article

  • Still some questions about USB

    - by b-gen-jack-o-neill
    Hi, a few days ago I asked here about implementing USB. Now, if I may, I would like to ask a few more questions about things I didn't quite understand. First, if I am right, Windows has a device driver for the USB interface, i.e. for the physical device that sends and receives communication. But what does this driver offer to the system (user)? I mean, the USB protocol is made so its devices are addressed: you first address the device, then send a message to it. But how sophisticated are the device controller (hardware) and its driver? Is the controller so sophisticated that it is a chip you just send a device address and data to, and it writes the outgoing data out and puts incoming data into some internal register to be read, or through DMA directly into memory? And how do its drivers (software) really work? Do the drivers have advanced functions like send _data to _device? Because I somewhat internally hope there is a way to directly send some data over USB, maybe by calling the USB drivers themselves? Is there any good article or tutorial you know of that really explains how all this logic works? Thanks.
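
    On the "can I just send some data over USB myself" point: in user space this is normally done through a generic USB library that talks to the OS's USB stack, rather than by calling the bus driver directly. A minimal sketch with libusb-1.0 follows; the VID/PID and endpoint are placeholders for whatever the actual device reports:

        #include <stdio.h>
        #include <libusb-1.0/libusb.h>

        #define VID 0x1234 /* hypothetical vendor id  */
        #define PID 0x5678 /* hypothetical product id */

        int main(void)
        {
            libusb_context *ctx = NULL;
            libusb_device_handle *dev;
            unsigned char data[] = { 0xDE, 0xAD, 0xBE, 0xEF };
            int transferred;

            libusb_init(&ctx);

            /* The library and the OS stack handle bus addressing and framing;
               user code just names the device and an endpoint. */
            dev = libusb_open_device_with_vid_pid(ctx, VID, PID);
            if (dev == NULL) {
                fprintf(stderr, "device not found\n");
                return 1;
            }

            libusb_claim_interface(dev, 0);

            /* Bulk write to OUT endpoint 0x01 with a 1000 ms timeout. */
            libusb_bulk_transfer(dev, 0x01, data, sizeof(data), &transferred, 1000);
            printf("sent %d bytes\n", transferred);

            libusb_release_interface(dev, 0);
            libusb_close(dev);
            libusb_exit(ctx);
            return 0;
        }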

    Read the article

  • Where to find new Micro-BTX (uBTX) motherboards? Or should I just replace the box?

    - by John Rudy
    OK, so I'm guessing that it's dead. It's not my machine, and the owner is on a very fixed (i.e., no) income. I'm generous, but I'm not that generous, since I already gave him what (at the time) was a fully functional and fairly well-equipped machine. (Aside from the mobo and proc, almost nothing else in it was stock: I'd taken it up to 3GB of RAM, upgraded the hard drive, added a decent video card, installed a wireless adapter, running Vista, etc.) According to further research, the machine uses a Micro-BTX (uBTX) motherboard and, since it's an AMD Athlon64, the AM2 socket. So I'm looking at a few options, and am wondering what's the best route to take?

    1. Find an AM2-socket uBTX mobo. I can't find them new online anywhere, leading me to believe that this is an obsolete form factor/chip combination. I don't want a refurb or a system pull because, quite honestly, once I deal with this mess, I don't want to go through it again in another year or two.
    2. Find an Intel uBTX mobo and a (relatively; hah, I still want at least a dual-core) inexpensive Intel CPU. At this point, the only things stock in the machine would be the case and the PSU. :)
    3. Buy a bare-bones kit (mobo/proc/PSU/case, sometimes even RAM) from somewhere like CompUSA/TigerDirect or Fry's and move all of the other hardware over. This makes life difficult because the copy of Vista is an upgrade, tied to the copy of XP which shipped on the Gateway, which is OEM and won't install on the new box. :)

    If I change the CPU brand (AMD to Intel), will I need to reinstall Windows, or can it just be reactivated? Where can I actually find a new, in-box, not-system-pull, not-refurb AM2 uBTX mobo? Do they even exist anymore? What kind of money are we talking (US dollars)? The end goal is to get the machine functional again as cheaply as humanly possible. If it were my own machine, I wouldn't even be asking this; I'd be custom-building a new one. However, it's not mine, I'm shelling out of pocket for the fix (plus the work), and thus want to keep that end price low-low-low.

    Read the article

  • Cliq Wireless questions

    - by Nathan Adams
    Here's the deal: I am by no means a Linux expert, even less so when it comes to the Android OS, but let's see if we can't solve this problem. The problem I am having is that on the Cliq we have a Broadcom chip. In order to use the wireless card you must first insert the module into the kernel. Fine:

        # insmod /system/lib/dhd.ko
        # lsmod
        dhd 164936 0 - Live 0xbf000000
        #

    BUT netcfg (or ifconfig in busybox) does not recognize that there is a wireless adapter there:

        # netcfg
        lo     UP   127.0.0.1       255.0.0.0       0x00000049
        dummy0 DOWN 0.0.0.0         0.0.0.0         0x00000082
        rmnet0 UP   14.67.164.2     255.255.255.252 0x00001043
        rmnet1 DOWN 0.0.0.0         0.0.0.0         0x00001002
        rmnet2 DOWN 0.0.0.0         0.0.0.0         0x00001002
        usb0   DOWN 0.0.0.0         0.0.0.0         0x00001002
        # busybox ifconfig
        lo     Link encap:Local Loopback
               inet addr:127.0.0.1  Mask:255.0.0.0
               UP LOOPBACK RUNNING  MTU:16436  Metric:1
               RX packets:282 errors:0 dropped:0 overruns:0 frame:0
               TX packets:282 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:0
               RX bytes:18754 (18.3 KiB)  TX bytes:18754 (18.3 KiB)
        rmnet0 Link encap:Ethernet  HWaddr EE:83:E8:B4:4A:ED
               inet addr:14.x.x.x  Bcast:14.67.164.3  Mask:255.255.255.252
               UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
               RX packets:7148 errors:0 dropped:0 overruns:0 frame:0
               TX packets:7659 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:1000
               RX bytes:2609236 (2.4 MiB)  TX bytes:908575 (887.2 KiB)
        #

    For giggles, if we attempt to launch wpa_supplicant anyway we get this:

        # wpa_supplicant -Dwext -ieth0 -c/data/misc/wifi/wpa_supplicant.conf
        ioctl[SIOCSIWPMKSA]: No such device
        ioctl[SIOCSIWMODE]: No such device
        Could not configure driver to use managed mode
        ioctl[SIOCGIFFLAGS]: No such device
        Could not set interface 'eth0' UP
        ioctl[SIOCGIWRANGE]: No such device
        ioctl[SIOCGIFINDEX]: No such device
        CTRL-EVENT-STATE-CHANGE id=-1 state=0
        ioctl[SIOCSIWENCODEEXT]: No such device
        ioctl[SIOCSIWENCODE]: No such device
        ioctl[SIOCSIWENCODEEXT]: No such device
        ioctl[SIOCSIWENCODE]: No such device
        ioctl[SIOCSIWENCODEEXT]: No such device
        ioctl[SIOCSIWENCODE]: No such device
        ioctl[SIOCSIWENCODEEXT]: No such device
        ioctl[SIOCSIWENCODE]: No such device
        ioctl[SIOCSIWAUTH]: No such device
        WEXT auth param 7 value 0x0 - Failed to disable WPA in the driver.
        ioctl[SIOCSIWAUTH]: No such device
        WEXT auth param 5 value 0x0 -
        ioctl[SIOCSIWAUTH]: No such device
        WEXT auth param 4 value 0x0 -
        ioctl[SIOCSIWAP]: No such device
        ioctl[SIOCGIFFLAGS]: No such device
        #

    In dmesg we get:

        <4>[18300.494065] dhd_oob_enable_intr : enable
        <4>[18305.019976] dhd_net_start failed bus is not ready
        <4>[18305.020278] dhdsdio_probe: dhd_net_start failed!

    Do I need to specify the firmware with insmod? Why are we trying to control the interface manually instead of through the Android API? The Android API doesn't support ad-hoc connections, as far as I can tell. The card, I am sure, most certainly can.

    Read the article

  • Archlinux/atheros WLAN configuration troubles

    - by GrinReaper
    I'm trying to configure Arch Linux to use my wireless network adapter. It's quite troublesome. From what I've gathered, it's an Atheros network adapter, using the ath5k driver/module. I can't get it to work; any ideas? Here's some of the output from my tinkering:

        # lspci | grep -i net
        00:0a.0 Ethernet controller: nVidia corporation MCP67 Ethernet (reva2)
        03:00.0 Ethernet controller: atheros communications inc. AR5001 Wireless Network Adapter (rev01)
        # lsusb
        ...
        Bus 004 Device 003: ID 03f0:17d Hewlett Packard Wireless (Bluetooth + WLAN Interface [Integrated Module]
        # ping -c 3 www.google.com
        ping: unknown host www.google.com
        # ping -c 3 8.8.8.8
        ping: network is unreachable
        # lspci -v
        03:00.0 Ethernet controller: atheros communications inc. AR5001 Wireless Network Adapter (rev01)
        ...
        Kernel driver in use: ath5k
        Kernel modules: ath5k
        # dmesg | grep ath5k
        registered as phy0
        registered led device
        ath5k: atheros chip found
        PCI INT A disabled
        registered led device
        registered as phy1
        # ip addr | sed '/^[0-9]/!d;s/: <.*$//'
        1: lo
        2: eth1
        3: eth0
        # ip link set <interface> up/down
        RTNETLINK answers: Operation not possible due to RF-kill

    Also, is there a way to dump text from the command line to a text file so I can just copy-paste? Sorry, first time using a Linux distro...

    EDIT: So I just tried this. I actually did this twice. (I can't tell which setting is on/off for my wireless adapter; the lights are blue all the time now.)

        # rfkill list
        0: hp-wifi: wireless lan
            softblocked: no  hardblocked: yes
        1: hp-bluetooth: bluetooth
            softblocked: no  hardblocked: yes
        3: phy1: wireless lan
            softblocked: no  hardblocked: yes

        # rfkill list
        0: hp-wifi: wireless lan
            softblocked: no  hardblocked: no
        1: hp-bluetooth: bluetooth
            softblocked: no  hardblocked: no
        3: phy1: wireless lan
            softblocked: no  hardblocked: yes
        7: hci0: bluetooth
            softblocked: no  hardblocked: no

    I've dug around some other articles and it seems like ath5k is supposed to be preferable to madwifi, so should I be using madwifi? I'm 99% sure I disabled the hardblock (by turning it ON) but, as shown above, phy1 wireless lan is STILL hardblocked. What gives? Maybe I've made some more fundamental error in a basic config file?

    EDIT: I've fixed the hardblock. I've tried pinging www.google.com, but to no avail. I get:

        ping: unknown host www.google.com

    In the Arch wiki: edit /etc/hosts and add the same HOSTNAME you entered in /etc/rc.conf:

        127.0.0.1 archlinux.domain.org localhost.localdomain localhost archlinux

    To my understanding, the hostname is just user-specified and based on preference(?) My /etc/rc.conf:

        HOSTNAME="gestalt"

    My /etc/hosts:

        127.0.0.1 localhost.localdomain localhost gestalt

    but should it be the following?

        120.0.0.1 localhost.domain.org localhost.localdomain localhost gestalt

    Read the article

  • Is there a way to permanently arrange 2 displays under XP?

    - by rumtscho
    When I am at home, on a business trip, or in a meeting, I use my laptop in the usual way. When I get to work, I put it on the docking station and boot it with the lid closed. The image appears on the two displays connected to the docking station: on the left, an old monitor connected over VGA; on the right, a big widescreen connected over DVI. Obviously, the video card seems to think that the DVI is the primary output and the VGA the secondary one, so Windows always displays the widescreen on the left and the old FSC monitor on the right. Thus, when I want to move the mouse pointer from the (physically) left display to the (physically) right display, I have to move it from right to left, which is a usability nightmare. Of course, I can just drag one display over the other in the display properties, and then everything is as it should be. The catch: Windows remembers this only as long as it has the two displays; every time it runs on the laptop display alone, it forgets the setting. Physically switching the monitors isn't an option, for ergonomic reasons. I prefer to run the more important applications on the bigger screen with the better colour space, and the shape of my desk forces me to sit off-centre, so the more important applications should be shown on the right display. Just switching the video ports doesn't help either: when I connect the big monitor over VGA, image quality deteriorates visibly. So what I do now is: every time I bring the laptop to my desk, I boot it, wait the whole 7 minutes of XP booting, syncing network drives, etc., then fire up the display properties, switch to the last tab, drag the widescreen display to the right, and close. Only then can I start working. Does someone have a better idea? The laptop is a Dell Latitude 630 with Windows XP SP3. It has an nVidia graphics card (not an onboard chip).

    Read the article

  • Microphones not working on Apple macbook Air 1,1 (Early 2008) under Linux

    - by jj_p
    I'm running Linux on an MBA. I can't make the microphones (neither external nor internal) work. I test using alsamixer and

        arecord -d 5 test-mic.wav

    together with

        aplay test-mic.wav

    It seems there is a problem with the kernel trying to decipher Apple's (intentionally) corrupted 'BIOS'; in particular, the mic pins are wrongly assigned. As far as we are concerned here, is there any difference between using EFI and BIOS-compatibility mode? (See https://wiki.archlinux.org/index.php/MacBook where they claim to have everything working out of the box on mba1,1.) A nice proposal would be to compile the latest Linux kernel and run hda-jack-retask to find the right configuration (in the case of the Realtek codec, the missing things I'm supposed to check are either some vendor-specific COEF verbs, EAPD or GPIO setup), and then come up with a kernel patch to address the issue. Since I'm not that familiar with this last part of the story, can anyone help me through this process?

    Some useful data:

    - The output from the alsa script run as root: http://www.alsa-project.org/db/?f=adae8ebee1007043fe83414ac4972319e02255fa
    - The command hda-jack-sense-test -a (with external mic in):

        Pin 0x14 (Internal Speaker): present = No
        Pin 0x15 (Green HP Out): present = Yes
        Pin 0x16 (Not connected): present = No
        Pin 0x17 (Not connected): present = No
        Pin 0x18 (Not connected): present = No
        Pin 0x19 (Not connected): present = No
        Pin 0x1a (Not connected): present = No
        Pin 0x1b (Not connected): present = No
        Pin 0x1c (Not connected): present = No
        Pin 0x1d (Not connected): present = No
        Pin 0x1e (Not connected): present = No
        Pin 0x1f (Not connected): present = No

    - Most likely the chip is Realtek ALC885 (compare also ALC889A): http://guide-images.ifixit.net/igi/bBTSqaeK5JpQ1AWe.large , although at the moment alsa reads it as ALC889A
    - Takashi Iwai's tutorial: https://www.kernel.org/doc/Documentation/sound/alsa/HD-Audio.txt
    - Some people researched the original files from a running OS X installation on this same model (I think the relevant files are AppleHDA.kext/Contents/MacOS/AppleHDA, AppleHDA.kext/Contents/PlugIns/AppleHDAHardwareConfigDriver.kext/Contents/Info.plist, AppleHDA.kext/Contents/Resources/layout12.xml.zlib and AppleHDA.kext/Contents/Resources/Platforms.xml.zlib): http://www.insanelymac.com/forum/topic/220090-alc889a-pin-configuration/#entry1554954
    - Datasheet: http://www.realtek.info/pdf/ALC885_1-1.pdf (from the same Realtek one can also try to download a Linux driver, but this is just taken from the ALSA project, as stated in the readme file)
    - Compare with this Arch guy: http://www.alsa-project.org/db/?f=3ca8243c0626844f0264a3faad0aa72018bc14f4
    - Here, for the first time, support for audio (except mics) on the mba1,2 (which is morally the same as 1,1) was patched into the kernel: http://www.alsa-project.org/pipermail/alsa-devel/2010-February/025511.html
    - The same jack supposedly works both for HP and ext mic; I think it's called TRRS, and it's the same as the one used e.g. for iPhones
    - This guy might have done a similar job, though for a more recent version and for sound globally, not just mics: http://blogs.aerys.in/jeanmarc-leroux/2013/09/15/fixing-2013-macbook-air-ubuntu-sound-issue/

    (This is a mirror of http://unix.stackexchange.com/questions/73044/microphones-not-working-on-apple-macbook-air-1-1-early-2008-under-linux )

    Read the article

  • GeForce 8800GT not even giving basic output

    - by Sam
    My Dad bought a GeForce 8800GT graphics card quite a long time ago now. It has never worked in his PC. Printout from dxdiag:

        System Information
        Time of this report: 4/13/2010, 19:52:40
        Machine name: USER-PC
        Operating System: Windows Vista™ Home Premium (6.0, Build 6001) Service Pack 1 (6001.vistasp1_gdr.091208-0542)
        Language: English (Regional Setting: English)
        System Manufacturer: To Be Filled By O.E.M.
        System Model: To Be Filled By O.E.M.
        BIOS: Default System BIOS
        Processor: Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz (4 CPUs), ~2.3GHz
        Memory: 2046MB RAM
        Page File: 1045MB used, 3296MB available
        Windows Dir: C:\Windows
        DirectX Version: DirectX 10
        DX Setup Parameters: Not found
        DxDiag Version: 6.00.6001.18000 32bit Unicode

        DxDiag Notes
        Display Tab 1: No problems found.
        Sound Tab 1: No problems found.
        Sound Tab 2: No problems found.
        Sound Tab 3: No problems found.
        Input Tab: No problems found.

        DirectX Debug Levels
        Direct3D: 0/4 (retail)
        DirectDraw: 0/4 (retail)
        DirectInput: 0/5 (retail)
        DirectMusic: 0/5 (retail)
        DirectPlay: 0/9 (retail)
        DirectSound: 0/5 (retail)
        DirectShow: 0/6 (retail)

        Display Devices
        Card name: ATI Radeon HD 2400 PRO
        Manufacturer: ATI Technologies Inc.
        Chip type: ATI Radeon Graphics Processor (0x94C3)
        DAC type: Internal DAC(400MHz)
        Device Key: Enum\PCI\VEN_1002&DEV_94C3&SUBSYS_00000000&REV_00
        Display Memory: 1012 MB
        Dedicated Memory: 245 MB
        Shared Memory: 767 MB
        Current Mode: 1280 x 960 (32 bit) (75Hz)
        Monitor: Generic PnP Monitor
        Driver Name: atiumdag.dll,atiumdva.dat,atitmmxx.dll
        Driver Version: 7.14.0010.0523 (English)
        DDI Version: 10
        Driver Attributes: Final Retail
        Driver Date/Size: 8/22/2007 02:43:14, 3021312 bytes

    That info is from the current card, which has been installed since the machine's purchase roughly 3-4 years ago. When I physically install the new card, I put it into the purple slot on the motherboard that the old card was in (if I go into the device manager and select properties on the current card, it confirms that the slot is a "PCI Slot 16 (PCI bus 2, device 0, function 0)") and boot up the computer, but get absolutely no output. The screen that we have registers that it is connected to something (by not displaying the screen it does when the cable is unplugged) but just remains blank; no output at all. I recently took the card to my university and one of my friends, who is better with hardware issues than I am, tried it in his system and it worked perfectly, no issues whatsoever. I do not have a spec list for his system, but I could get one if you need it. If you need any more information on this issue, I will be happy to supply you with it, as I am starting to get very annoyed with this problem ^_^

    Read the article

  • Low FPS in some games, but hardware not fully used

    - by Mario De Schaepmeester
    I just did a little funny experiment in the game/sim Train Simulator 2013. I normally get good FPS in it (around 30) at full settings. What I did was make a really, really long train, so that the calculations the sim needed to make were enormous (the sim is quite realistic: it takes into account things like speed/acceleration, G-forces, comfort levels, possible wheel slip and many more, and most of those things on each carriage separately). This resulted in only 14 FPS as reported by the game, but it felt more like 8 FPS or so. I have a Logitech G15 keyboard with an LCD that allows me to monitor CPU/RAM and video card load. The strange thing is, all CPU cores were busy, but the total load was only about 60% maximum at all times. The video card was only at 30% load (a possibly important note: its memory was full, which is however not unusual for the game in question). The RAM had plenty of room, and there weren't many operations, as it didn't grow or shrink much. I just have the feeling that the game would run smoother if it used more of my hardware's power. Why is it not doing so? I saw the same thing in another game, The Elder Scrolls: Morrowind, when using more than 100 mods (which all use scripting) plus a few high-res texture mods and a full-on graphics improvement program. The engine is very old (2003), so I thought this might be the cause (not being optimised for multithreading). I had thought of possible causes, like:

    - The operating system doesn't let the games use all the resources.
    - The games don't make use of multithreading appropriately.

    To eliminate the former, I tried a CPU stress tool, and it got 100% CPU juice as I let it run, so the OS is not the problem. I gave the game's thread "higher" priority, though. My actual question: in both games, I did things the engine was not really built to do or support. Can those games' framerate be limited because their own engine cannot cope? What is the real reason and, more importantly, can I help it? And in any case, could something actually be wrong with my hardware? It's all reasonably new (a couple of months old), and I (almost) never experience any other trouble; modern and much more demanding games work absolutely fine.

    Specs:
    - CPU: AMD Phenom II 965 X4 @ 3.4 GHz
    - RAM: 8 GB of DDR3 RAM
    - Video: MSI GTX560 (nVidia chip) with 1 GB of GDDR5 memory
    - OS: Windows 7 Ultimate 64-bit
    - Nothing overclocked.

    Read the article

  • System locking up with suspicious messages about hard disk

    - by Chris Conway
    My system has started behaving strangely, intermittently locking up. I see messages like the following in syslog:

        Nov 18 22:22:00 claypool kernel: [ 3428.078156] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
        Nov 18 22:22:00 claypool kernel: [ 3428.078163] ata3.00: irq_stat 0x40000000
        Nov 18 22:22:00 claypool kernel: [ 3428.078167] sr 2:0:0:0: CDB: Test Unit Ready: 00 00 00 00 00 00
        Nov 18 22:22:00 claypool kernel: [ 3428.078182] ata3.00: cmd a0/00:00:00:00:00/00:00:00:00:00/a0 tag 0
        Nov 18 22:22:00 claypool kernel: [ 3428.078184]          res 50/00:03:00:00:00/00:00:00:00:00/a0 Emask 0x1 (device error)
        Nov 18 22:22:00 claypool kernel: [ 3428.078188] ata3.00: status: { DRDY }
        Nov 18 22:22:00 claypool kernel: [ 3428.080887] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
        Nov 18 22:22:00 claypool kernel: [ 3428.080890] ata3.00: irq_stat 0x40000000
        Nov 18 22:22:00 claypool kernel: [ 3428.080893] sr 2:0:0:0: CDB: Test Unit Ready: 00 00 00 00 00 00
        Nov 18 22:22:00 claypool kernel: [ 3428.080905] ata3.00: cmd a0/00:00:00:00:00/00:00:00:00:00/a0 tag 0
        Nov 18 22:22:00 claypool kernel: [ 3428.080906]          res 50/00:03:00:00:00/00:00:00:00:00/a0 Emask 0x1 (device error)
        Nov 18 22:22:00 claypool kernel: [ 3428.080910] ata3.00: status: { DRDY }

    And then this:

        Nov 18 23:13:56 claypool kernel: [ 6544.000798] ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
        Nov 18 23:13:56 claypool kernel: [ 6544.000804] ata1.00: failed command: FLUSH CACHE EXT
        Nov 18 23:13:56 claypool kernel: [ 6544.000814] ata1.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 0
        Nov 18 23:13:56 claypool kernel: [ 6544.000815]          res 40/00:00:00:4f:c2/00:00:00:00:00/40 Emask 0x4 (timeout)
        Nov 18 23:13:56 claypool kernel: [ 6544.000819] ata1.00: status: { DRDY }
        Nov 18 23:13:56 claypool kernel: [ 6544.000825] ata1: hard resetting link
        Nov 18 23:14:01 claypool kernel: [ 6549.360324] ata1: link is slow to respond, please be patient (ready=0)
        Nov 18 23:14:06 claypool kernel: [ 6554.008091] ata1: COMRESET failed (errno=-16)
        Nov 18 23:14:06 claypool kernel: [ 6554.008103] ata1: hard resetting link
        Nov 18 23:14:11 claypool kernel: [ 6559.372246] ata1: link is slow to respond, please be patient (ready=0)
        Nov 18 23:14:16 claypool kernel: [ 6564.020228] ata1: COMRESET failed (errno=-16)
        Nov 18 23:14:16 claypool kernel: [ 6564.020235] ata1: hard resetting link
        Nov 18 23:14:21 claypool kernel: [ 6569.380109] ata1: link is slow to respond, please be patient (ready=0)
        Nov 18 23:14:31 claypool kernel: [ 6579.460243] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
        Nov 18 23:14:31 claypool kernel: [ 6579.486595] ata1.00: configured for UDMA/133
        Nov 18 23:14:31 claypool kernel: [ 6579.486601] ata1.00: retrying FLUSH 0xea Emask 0x4
        Nov 18 23:14:31 claypool kernel: [ 6579.486939] ata1.00: device reported invalid CHS sector 0
        Nov 18 23:14:31 claypool kernel: [ 6579.486952] ata1: EH complete
        Nov 18 23:17:01 claypool CRON[3910]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
        Nov 18 23:17:01 claypool CRON[3908]: (CRON) error (grandchild #3910 failed with exit status 1)
        Nov 18 23:17:01 claypool postfix/sendmail[3925]: fatal: open /etc/postfix/main.cf: No such file or directory
        Nov 18 23:17:01 claypool CRON[3908]: (root) MAIL (mailed 1 byte of output; but got status 0x004b, #012)
        Nov 18 23:39:01 claypool CRON[4200]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -type f -cmin +$(/usr/lib/php5/maxlifetime) -print0 | xargs -n 200 -r -0 rm)

    There are no messages logged after 23:39. When I next tried to use the machine, it would not return from the screensaver (blank screen), nor switch to another terminal, and I had to hard-reboot it.

    [UPDATE] The output of smartctl is here. I had trouble getting this, because / is being mounted read-only (?!), which prevents most applications from running. Also, it may not be related, but I have the following worrying messages in dmesg:

        [   10.084596] k8temp 0000:00:18.3: Temperature readouts might be wrong - check erratum #141
        [   10.098477] i2c i2c-0: nForce2 SMBus adapter at 0x600
        [   10.098483] ACPI: resource nForce2_smbus [io 0x0700-0x073f] conflicts with ACPI region SM00 [??? 0x00000700-0x0000073f flags 0x30]
        [   10.098486] ACPI: This conflict may cause random problems and system instability
        [   10.098487] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver
        [   10.098509] i2c i2c-1: nForce2 SMBus adapter at 0x700
        [   10.112570] Linux agpgart interface v0.103
        [   10.155329] atk: Resources not safely usable due to acpi_enforce_resources kernel parameter
        [   10.161506] it87: Found IT8712F chip at 0x290, revision 8
        [   10.161517] it87: VID is disabled (pins used for GPIO)
        [   10.161527] it87: in3 is VCC (+5V)
        [   10.161528] it87: in7 is VCCH (+5V Stand-By)
        [   10.161560] ACPI: resource it87 [io 0x0295-0x0296] conflicts with ACPI region ECRE [??? 0x00000290-0x000002af flags 0x45]
        [   10.161562] ACPI: This conflict may cause random problems and system instability
        [   10.161564] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver

    [UPDATE 2] I swapped in a new SATA cable, per Phil's suggestion. The current output of smartctl is here, if it helps.

    [UPDATE 3] I don't think the cable fixed it. The system hasn't locked up yet, but my media player crashed a few minutes ago and I have the following in the syslog:

        Nov 20 16:07:17 claypool kernel: [ 2294.400033] ata1: link is slow to respond, please be patient (ready=0)
        Nov 20 16:07:47 claypool kernel: [ 2324.084581] ata1: COMRESET failed (errno=-16)
        Nov 20 16:07:47 claypool kernel: [ 2324.084588] ata1: limiting SATA link speed to 1.5 Gbps
        Nov 20 16:07:47 claypool kernel: [ 2324.084592] ata1: hard resetting link

    I get the following response from smartctl:

        $ sudo smartctl -a /dev/sda
        [sudo] password for chris:
        sudo: Can't open /var/lib/sudo/chris/0: Read-only file system
        smartctl 5.40 2010-03-16 r3077 [i686-pc-linux-gnu] (local build)
        Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net
        Device: /0:0:0:0 Version:
        scsiModePageOffset: response length too short, resp_len=47 offset=50 bd_len=46
        >> Terminate command early due to bad response to IEC mode page
        A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options.

    Read the article

  • Implementing an Interceptor Using NHibernate's Built-In Dynamic Proxy Generator

    - by Ricardo Peres
    NHibernate 3.2 came with an included proxy generator, which means there is no longer the need (or the possibility, for that matter) to choose Castle DynamicProxy, LinFu or Spring. This is actually a good thing, because it means one less assembly to deploy. Apparently, this generator was based, at least partially, on LinFu. As there are not many tutorials out there demonstrating its usage, here's one, demonstrating one of the most requested features: implementing INotifyPropertyChanged. This interceptor, of course, will still offer all of the NHibernate functionality that you are used to, such as lazy loading.

    We will start by implementing an NHibernate interceptor, by inheriting from the base class NHibernate.EmptyInterceptor. This class does not do anything by itself, but it allows us to plug in behavior by overriding some of its methods, in this case Instantiate:

        public class NotifyPropertyChangedInterceptor : EmptyInterceptor
        {
            private ISession session = null;

            private static readonly ProxyFactory factory = new ProxyFactory();

            public override void SetSession(ISession session)
            {
                this.session = session;
                base.SetSession(session);
            }

            public override Object Instantiate(String clazz, EntityMode entityMode, Object id)
            {
                Type entityType = Type.GetType(clazz);
                IProxy proxy = factory.CreateProxy(entityType, new _NotifyPropertyChangedInterceptor(), typeof(INotifyPropertyChanged)) as IProxy;

                _NotifyPropertyChangedInterceptor interceptor = proxy.Interceptor as _NotifyPropertyChangedInterceptor;
                interceptor.Proxy = this.session.SessionFactory.GetClassMetadata(entityType).Instantiate(id, entityMode);

                this.session.SessionFactory.GetClassMetadata(entityType).SetIdentifier(proxy, id, entityMode);

                return (proxy);
            }
        }

    Then we need a class that implements the NHibernate dynamic proxy behavior; let's place it inside our interceptor, because it will only need to be used there:

        class _NotifyPropertyChangedInterceptor : NHibernate.Proxy.DynamicProxy.IInterceptor
        {
            private PropertyChangedEventHandler changed = delegate { };

            public Object Proxy { get; set; }

            #region IInterceptor Members

            public Object Intercept(InvocationInfo info)
            {
                Boolean isSetter = info.TargetMethod.Name.StartsWith("set_") == true;
                Object result = null;

                if (info.TargetMethod.Name == "add_PropertyChanged")
                {
                    PropertyChangedEventHandler propertyChangedEventHandler = info.Arguments[0] as PropertyChangedEventHandler;
                    this.changed += propertyChangedEventHandler;
                }
                else if (info.TargetMethod.Name == "remove_PropertyChanged")
                {
                    PropertyChangedEventHandler propertyChangedEventHandler = info.Arguments[0] as PropertyChangedEventHandler;
                    this.changed -= propertyChangedEventHandler;
                }
                else
                {
                    result = info.TargetMethod.Invoke(this.Proxy, info.Arguments);
                }

                if (isSetter == true)
                {
                    String propertyName = info.TargetMethod.Name.Substring("set_".Length);
                    this.changed(this.Proxy, new PropertyChangedEventArgs(propertyName));
                }

                return (result);
            }

            #endregion
        }

    What this does for every interceptable method (those that are either virtual or come from the INotifyPropertyChanged interface) is:

    - For the methods that came from the INotifyPropertyChanged interface, add_PropertyChanged and remove_PropertyChanged (yes, events are methods), it adds an implementation that adds or removes the event handlers to the delegate which we declared as changed;
    - All the others are directed to the place where they are actually implemented, which is the Proxy field;
    - If the call is setting a property, it fires the PropertyChanged event afterwards.

    In order to use this, we need to add the interceptor to the Configuration before building the ISessionFactory:

        using (ISessionFactory factory = cfg.SetInterceptor(new NotifyPropertyChangedInterceptor()).BuildSessionFactory())
        {
            using (ISession session = factory.OpenSession())
            using (ITransaction tx = session.BeginTransaction())
            {
                Customer customer = session.Get<Customer>(100); // some id
                INotifyPropertyChanged inpc = customer as INotifyPropertyChanged;
                inpc.PropertyChanged += delegate(Object sender, PropertyChangedEventArgs e)
                {
                    // fired when a property changes
                };

                customer.Address = "some other address"; // will raise PropertyChanged
                customer.RecentOrders.ToList(); // will trigger the lazy loading
            }
        }

    Any problems or questions, do drop me a line!

    Read the article

  • Talend Enterprise Data Integration overperforms on Oracle SPARC T4

    - by Amir Javanshir
    The SPARC T microprocessor, released in 2005 by Sun Microsystems and now continued at Oracle, has a good track record in parallel execution and multi-threaded performance. However, it was less suited to pure single-threaded workloads. The new SPARC T4 processor fills that gap by offering 5x better single-thread performance over previous generations. Following our long-term relationship with Talend, a fast-growing ISV positioned by Gartner in the "Visionaries" quadrant of the "Magic Quadrant for Data Integration Tools", we decided to test some of their integration components with the T4 chip, more precisely on a T4-1 system, in order to verify first hand whether this new processor stands up to its promises. Several tests were performed, mainly focused on:

    - Single-thread performance of the new SPARC T4 processor compared to an older SPARC T2+ processor
    - Overall throughput of the SPARC T4-1 server using multiple threads

    The tests consisted of reading large amounts of data (tens of gigabytes), processing them, and writing them back to a file or an Oracle 11gR2 database table. They are CPU-, memory- and IO-bound tests. Given the main focus of this project, CPU performance, bottlenecks were removed as much as possible on the memory and IO sub-systems; when possible, the data to process was put into the ZFS filesystem cache, for instance. Also, two external storage devices were directly attached to the servers under test, each one divided into two ZFS pools for read and write operations.

    Multi-thread: Testing throughput on the Oracle T4-1

    The tests were performed with different numbers of simultaneous threads (1, 2, 4, 8, 12, 16, 32, 48 and 64) and using different storage devices: Flash, Fibre Channel storage, two striped internal disks and one single internal disk. All storage devices used ZFS for filesystem and volume management. Each thread read a dedicated 1 GB file containing 12.5M lines with the following structure:

        customerID;FirstName;LastName;StreetAddress;City;State;Zip;Cust_Status;Since_DT;Status_DT
        1;Ronald;Reagan;South Highway;Santa Fe;Montana;98756;A;04-06-2006;09-08-2008
        2;Theodore;Roosevelt;Timberlane Drive;Columbus;Louisiana;75677;A;10-05-2009;27-05-2008
        3;Andrew;Madison;S Rustle St;Santa Fe;Arkansas;75677;A;29-04-2005;09-02-2008
        4;Dwight;Adams;South Roosevelt Drive;Baton Rouge;Vermont;75677;A;15-02-2004;26-01-2007
        [...]

    (The graphs of these results are not reproduced here.) In summary: unsurprisingly, up to 16 threads all files fit in the ZFS cache (a.k.a. L2ARC); once the cache is hot, there is no performance difference depending on the underlying storage. From 16 threads upwards, however, it is clear that IO becomes a bottleneck, so having a good IO subsystem is key. Single-disk performance collapses, whereas the Sun F5100 and ST6180 arrays allow the T4-1 to scale quite seamlessly. From 32 to 64 threads, performance is almost constant, with just a slow decline. For the database load tests, only the best IO configuration (using external storage devices) was used, hosting the Oracle tablespaces and redo log files. Using the Sun Storage F5100 array allows the T4-1 server to scale up to 48 parallel JVM processes before saturating the CPU. The final result is a staggering 646K lines per second inserted into an Oracle table using 48 parallel threads.

    Single-thread: Testing single-thread performance

    Seven different tests were performed on both servers. Given that only one thread, and thus one file, was read, no IO bottleneck was involved, all data being served from the ZFS cache.

    1. Read File → Filter → Write File: read file, filter data, write the filtered data to a new file. The filter is set on the "Status" column: only lines with status set to "A" are selected. This limits each output file to about 500 MB.
    2. Read File → Load Database Table: read file, insert into a single Oracle table.
    3. Average: read file, compute the average of a numeric column, write the result to a new file.
    4. Division & Square Root: read file, perform a division and square root on a numeric column, write the result data to a new file.
    5. Oracle DB Dump: dump the content of an Oracle table (12.5M rows) into a CSV file.
    6. Transform: read file, transform, write the result data to a new file. The transformations applied are: set the address column to upper case and add an extra column at the end, which is the concatenation of two columns.
    7. Sort: read file, sort a numeric and an alphanumeric column, write the result data to a new file.

    The following table presents the final results of the tests. The throughput unit is thousand lines per second processed (K lines/second); Improvement is the % of improvement between the T5140 and the T4-1:

        Test                     T4-1 (Time s.)  T5140 (Time s.)  Improvement  T4-1 (Throughput)  T5140 (Throughput)
        Read/Filter/Write             125              806            645%            100                 16
        Read/Load Database            195             1111            570%             64                 11
        Average                        96              557            580%            130                 22
        Division & Square Root        161             1054            655%             78                 12
        Oracle DB Dump                164              945            576%             76                 13
        Transform                     159             1124            707%             79                 11
        Sort                          251             1336            532%             50                  9

    The improvement in single-thread performance is quite dramatic: depending on the test, the T4 is between 5.4 and 7 times faster than the T2+. It seems clear that the SPARC T4 processor has gone a long way towards filling the gap in single-thread performance, without sacrificing the multi-threaded capability, as it still shows very impressive scaling on heavy-duty multi-threaded jobs. Finally, as always at Oracle ISV Engineering, we are happy to help our ISV partners test their own applications on our platforms, so don't hesitate to contact us and let's see what the SPARC T4-based systems can do for your application!

    "As described in this benchmark, Talend Enterprise Data Integration has overperformed on T4. I was generally happy to see that the T4 gave scaling opportunities for many scenarios like complex aggregations. Row by row insertion in Oracle DB is faster, with more than 650,000 rows per second, without using any bulk Oracle capabilities!" Cedric Carbone, Talend CTO.

    Read the article
