Search Results

Search found 501 results on 21 pages for 'sequential'.

Page 10 of 21

  • Linux RAID0 - relocating member disk

    - by qdot
    I've got an issue I would rather handle with the array online - I am using RAID0 for temporary video storage - data that is low-cost to restore, but that is used frequently. The software array looks like this: md1 : active raid0 sdb1[2] sdc1[3] sdd1[0] sde1[1] 1953487616 blocks 64k chunks I have another partition (sda1) in this system, that I want to use to replace sdc1 (The drives are of varying age, and sdc1 is definitely the slowest one, limiting the entire array's sequential read performance to only 300MB/s). Is there a way to migrate the data from sdc1 to sda1 while the array is still online?
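
    Before deciding how to migrate, it can help to confirm that sdc1 really is the slow member. A minimal sketch (not from the original post; it assumes hdparm is installed and the device names match the array above):

        # rough sequential-read benchmark of each member disk
        for d in /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sda1; do
            echo "$d"
            hdparm -t "$d"
        done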

    Read the article

  • ERROR #342: DEVICE_SHADER_LINKAGE_SEMANTICNAME_NOT_FOUND

    - by Telanor
    I've stared at this for at least half an hour now and I cannot figure out what DirectX is complaining about. I know this error normally means you put a float3 instead of a float4 or something like that, but I've checked over and over and as far as I can tell, everything matches. This is the full error message:

        D3D11: ERROR: ID3D11DeviceContext::DrawIndexed: Input Assembler - Vertex Shader linkage error: Signatures between stages are incompatible. The input stage requires Semantic/Index (COLOR,0) as input, but it is not provided by the output stage. [ EXECUTION ERROR #342: DEVICE_SHADER_LINKAGE_SEMANTICNAME_NOT_FOUND ]

    This is the vertex shader's input signature as seen in PIX:

        // Input signature:
        //
        // Name                 Index   Mask Register SysValue Format  Used
        // -------------------- ----- ------ -------- -------- ------ ------
        // POSITION                 0   xyz         0     NONE  float  xyz
        // NORMAL                   0   xyz         1     NONE  float
        // COLOR                    0   xyzw        2     NONE  float

    The HLSL structure looks like this:

        struct VertexShaderInput
        {
            float3 Position : POSITION0;
            float3 Normal : NORMAL0;
            float4 Color : COLOR0;
        };

    The input layout, from PIX, is: [screenshot omitted from this excerpt]

    The C# structure holding the data looks like this:

        [StructLayout(LayoutKind.Sequential)]
        public struct PositionColored
        {
            public static int SizeInBytes = Marshal.SizeOf(typeof(PositionColored));
            public static InputElement[] InputElements = new[]
            {
                new InputElement("POSITION", 0, Format.R32G32B32_Float, 0),
                new InputElement("NORMAL", 0, Format.R32G32B32_Float, 0),
                new InputElement("COLOR", 0, Format.R32G32B32A32_Float, 0)
            };

            Vector3 position;
            Vector3 normal;
            Vector4 color;

            #region Properties
            ...
            #endregion

            public PositionColored(Vector3 position, Vector3 normal, Vector4 color)
            {
                this.position = position;
                this.normal = normal;
                this.color = color;
            }

            public override string ToString()
            {
                StringBuilder sb = new StringBuilder(base.ToString());
                sb.Append(" Position=");
                sb.Append(position);
                sb.Append(" Color=");
                sb.Append(Color);
                return sb.ToString();
            }
        }

    SizeInBytes comes out to 40, which is correct (4*3 + 4*3 + 4*4 = 40). Can anyone find where the mistake is?

    Read the article

  • Strange ports on default install of W7

    - by Sabre
    I have a fresh base install of Windows 7, and while looking for something else I noticed the attached netstat output. What concerns me is that this is Windows + TrueCrypt + drivers, with nothing else installed. The sequential high-range ports belonging to several different, seemingly unremarkable services seemed odd. So I torched the install, used Active@ to scrub the disk, re-downloaded the ISO from MSDN, and did a fresh reinstall, and voila, they are there again. It just seems out of place. I have seen many netstats over the years, and this one strikes me as odd, so I started thinking rootkit? (Just FYI, when I reloaded I named the machine "Error", which is why Task Manager shows that as the computer name.) So I would like to know whether anyone else can explain it, in which case it may be normal, or whether they would be worried as well and I should start considering that something very strange is occurring on my network.
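
    For what it's worth, Windows Vista and later assign services dynamic RPC listeners from the 49152+ ephemeral range, which is one common, benign explanation for a run of sequential high ports on a clean install. A quick way to see which processes own the listeners before assuming a rootkit (a general suggestion, not from the original post; run from an elevated command prompt):

        netstat -abno
        tasklist /svc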

    Read the article

  • Identify malicious subnet

    - by Macros
    I have been experiencing performance issues on a website for a while, and they always seem to hit around the same time. Having analysed the logs, I've found a big spike in requests that corresponds with the slowdown, with all requests coming from the same subnet. It feels like an attempt to scrape the site (it is a car hire site, and the requests are sequential for each IP, with incremental search criteria), and I would like to identify the source. The subnet in question is 209.67.89.x, which I can see is owned by Savvis; however, I can't reverse-DNS any of the IPs. Is there any other way I can gain more information on this (other than contacting them directly, which I am also doing)?
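
    A couple of quick checks that can add context before contacting the owner (a sketch, not from the original post; it assumes a Unix-like host and an Apache-style log file named access.log):

        # ownership / abuse contact for the range
        whois 209.67.89.1

        # requests per IP from that /24, busiest first
        grep -E '^209\.67\.89\.' access.log | awk '{print $1}' | sort | uniq -c | sort -rn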

    Read the article

  • Oracle regains the #1 UNIX Shipments Marketshare

    - by EricReid-Oracle
    Oracle has regained the #1 UNIX Server Shipments spot! According to IDC, Oracle's share was 33.6%, up from 32.7% in the year-ago period and 32.2% in C4Q13:

    IDC: WW Unix Unit Shipments, Share, Growth

                      2013Q1   Share   2013Q4   Share   2014Q1   Share   Sequential Growth   Y/Y Growth
      Oracle          10,141   32.7%   10,294   32.2%    8,355   33.6%        -18.8%           -17.6%
      IBM             10,203   32.9%   11,533   36.0%    6,919   27.8%        -40.0%           -32.2%
      HP               7,046   22.7%    6,786   21.2%    6,549   26.4%         -3.5%            -7.1%
      Fujitsu          1,174    3.8%    1,141    3.6%    1,069    4.3%         -6.3%            -8.9%
      Dell               565    1.8%      499    1.6%      519    2.1%          4.0%            -8.2%
      NEC                 69    0.2%       81    0.3%       63    0.3%        -22.5%            -9.4%
      Others           1,804    5.8%    1,684    5.3%    1,380    5.6%        -18.1%           -23.5%
      Total Market    31,002  100.0%   32,018  100.0%   24,854  100.0%        -22.4%           -19.8%

    While the UNIX server space is currently undergoing some contraction (on a pure numbers basis), this can be traced in part to an overall consolidation trend, due to the greatly increased price-performance of our systems. Consider this: one SPARC T5-4 system has 1/16th the number of sockets and 1/192nd the number of cores of the previous high-end M9000-64 system -- all at 5X the price-performance. SPARC. Solaris. Nuff' said.

    Read the article

  • Can I use applescript to click buttons in background?

    - by Giorgio
    Sorry for this generic and probably badly written question. I've never programmed in AppleScript, but I'm quite familiar with other coding languages. I need to click two buttons in sequence inside the lobby of an application (when you click the first, a popup appears and we should click 'OK'). However, things are a little more complicated than that, because: 1) the lobby of this program isn't in the foreground: it's covered by other open windows (I don't have the experience to know whether this is a problem); and 2) there should be a timer, so the program clicks these buttons at regular intervals. Is this feasible with AppleScript?
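
    Clicking named buttons via GUI scripting is generally possible even when the window is not frontmost, although it requires enabling access for assistive devices. A minimal sketch (the process name "LobbyApp" and button title "OK" are placeholders, not taken from the question):

        tell application "System Events"
            tell process "LobbyApp"
                click button "OK" of window 1
            end tell
        end tell

    For the timer, an AppleScript saved as a stay-open application can run this inside an "on idle" handler and return the number of seconds to wait between runs.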

    Read the article

  • python: what are efficient techniques to deal with deeply nested data in a flexible manner?

    - by AlexandreS
    My question is not about a specific code snippet but is more general, so please bear with me: how should I organize the data I'm analyzing, and which tools should I use to manage it?

    I'm using python and numpy to analyse data. Because the python documentation indicates that dictionaries are very optimized in python, and also because the data itself is very structured, I stored it in a deeply nested dictionary. Here is a skeleton of the dictionary: the position in the hierarchy defines the nature of the element, and each new line defines the contents of a key in the preceding level:

        [AS091209M02] [AS091209M01] [AS090901M06] ...
            [100113] [100211] [100128] [100121] ...
                [R16] [R17] [R03] [R15] [R05] [R04] [R07] ...
                    [1263399103] ...
                        [ImageSize] [FilePath] [Trials] [Depth] [Frames] [Responses] ...
                            [N01] [N04] ...
                                [Sequential] [Randomized]
                                    [Ch1] [Ch2]

    Edit: To explain my data set a bit better:

        [individual]                            ex: [AS091209M02]
        [imaging session (date string)]         ex: [100113]
        [region imaged]                         ex: [R16]
        [timestamp of file]                     ex: [1263399103]
        [properties of file]                    ex: [Responses]
        [regions of interest in image]          ex: [N01]
        [format of data]                        ex: [Sequential]
        [channel of acquisition: this key indexes an array of values]  ex: [Ch1]

    The type of operations I perform is, for instance, to compute properties of the arrays (listed under Ch1, Ch2), pick up arrays to make a new collection, for instance analyze responses of N01 from region 16 (R16) of a given individual at different time points, etc. This structure works well for me and is very fast, as promised. I can analyze the full data set pretty quickly (and the dictionary is far too small to fill up my computer's RAM: half a gig).

    My problem comes from the cumbersome manner in which I need to program the operations on the dictionary. I often have stretches of code that go like this:

        for mk in dic.keys():
            for rgk in dic[mk].keys():
                for nk in dic[mk][rgk].keys():
                    for ik in dic[mk][rgk][nk].keys():
                        for ek in dic[mk][rgk][nk][ik].keys():
                            # do something

    which is ugly, cumbersome, non-reusable, and brittle (it needs to be recoded for any variant of the dictionary). I tried using recursive functions, but apart from the simplest applications, I ran into some very nasty bugs and bizarre behaviors that caused a big waste of time (it does not help that I don't manage to debug with pdb in ipython when I'm dealing with deeply nested recursive functions). In the end the only recursive function I use regularly is the following:

        def dicExplorer(dic, depth=-1, stp=0):
            '''prints the hierarchy of a dictionary.
            if depth is not specified, will explore all of the dictionary
            '''
            if depth - stp == 0:
                return
            try:
                list_keys = dic.keys()
            except AttributeError:
                return
            stp += 1
            for key in list_keys:
                print '+%s> [\'%s\']' % (stp * '---', key)
                dicExplorer(dic[key], depth, stp)

    I know I'm doing this wrong, because my code is long, noodly and non-reusable. I need either to use better techniques to flexibly manipulate the dictionaries, or to put the data in some database format (sqlite?). My problem is that since I'm (badly) self-taught with regard to programming, I lack the practical experience and background knowledge to appreciate the options available. I'm ready to learn new tools (SQL, object-oriented programming), whatever it takes to get the job done, but I am reluctant to invest my time and effort into something that will be a dead end for my needs. So what are your suggestions for tackling this issue, and being able to code my tools in a more brief, flexible and reusable manner?
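
    One way to avoid the hard-coded nested loops, while keeping the dictionary exactly as it is, is a small generator that walks every path down to the leaf arrays. A minimal sketch (not from the original post; it assumes every non-dict value is a leaf):

        def iter_leaves(d, path=()):
            """Yield (path, value) for every leaf of a nested dict."""
            if isinstance(d, dict):
                for key, value in d.items():
                    for leaf in iter_leaves(value, path + (key,)):
                        yield leaf
            else:
                yield path, d

        # example: collect every array stored under .../Sequential/Ch1
        ch1_arrays = [value for path, value in iter_leaves(dic)
                      if path[-2:] == ('Sequential', 'Ch1')]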

    Read the article

  • dynamically include zipfilesets into a WAR

    - by Konstantin
    Hi all - a bit of a clumsy situation, but for the moment we cannot migrate to a more straightforward project layout. We have a project called myServices and it has 2 source folders (yes, I know, but that's the way it is for now). I'm trying to make the build process a bit more flexible, so we now have a property called artifact.names that is parsed by a generic build.xml; based on the name, it will call either the jar or the war task, e.g. myService-war will create a WAR file with the following zipfilesets included: myService-war-classes, myService-war-web-inf, myService-war-meta-inf. I want to add a bit more flexibility and allow additional zipfilesets, e.g. myService-war-etc-1, -2, etc., so that these are picked up by the package target automatically. I cannot use "if" inside the war target, and the ${ant.refid:myService-war-classes} property is not resolved either, so I'm kind of stuck with my options at the moment - how do I dynamically include a zipfileset into a WAR? You can refer to a fileset by id, but then it MUST be defined, i.e. you can't have it optional at the project level. Thank you. Some build.xml snippets:

        <target name="archive">
            <for list="${artifact.names}" param="artifact">
                <sequential>
                    <echo>Packaging artifact @{artifact} of project ${project.name}</echo>
                    <property name="display.@{artifact}.packaging" refid="@{artifact}.packaging" />
                    <echo>${display.@{artifact}.packaging}</echo>
                    <propertyregex property="@{artifact}.archive.type" input="@{artifact}"
                                   regexp="([a-zA-Z0-9]*)(\-)([ejw]ar)" select="\3" casesensitive="false"/>
                    <propertyregex property="@{artifact}.base.name" input="@{artifact}"
                                   regexp="([a-zA-Z0-9]*)(\-)([ejw]ar)" select="\1" casesensitive="false"/>
                    <echo>${@{artifact}.archive.type}</echo>
                    <if>
                        <then>
                            <war destfile="${jar.dir}/${@{artifact}.base.name}.war" compress="true" needxmlfile="false">
                                <resources refid="@{artifact}.packaging" />
                                <zipfileset refid="@{artifact}-classes" erroronmissingdir="false" />
                                <zipfileset refid="@{artifact}-meta-inf" erroronmissingdir="false" />
                                <zipfileset refid="@{artifact}-web-inf" erroronmissingdir="false" />
                                <!-- Additional zipfilesets to package -->
                                <zipfileset refid="@{artifact}-etc-2" erroronmissingdir="false" />
                                <zipfileset refid="@{artifact}-etc-3" erroronmissingdir="false" />
                                <zipfileset refid="@{artifact}-etc-4" erroronmissingdir="false" />
                                <zipfileset refid="@{artifact}-etc-5" erroronmissingdir="false" />
                                <zipfileset refid="@{artifact}-etc-6" erroronmissingdir="false" />
                                <zipfileset refid="@{artifact}-etc-7" erroronmissingdir="false" />
                                <zipfileset refid="@{artifact}-etc-8" erroronmissingdir="false" />
                                <zipfileset refid="@{artifact}-etc-9" erroronmissingdir="false" />
                                <zipfileset refid="@{artifact}-etc-10" erroronmissingdir="false" />
                            </war>
                        </then>
                        <!-- Generic JAR packaging -->
                        <else>
                            <jar destfile="${jar.dir}/@{artifact}.jar" compress="true">
                                <resources refid="@{artifact}.packaging" />
                                <zipfileset refid="@{artifact}-meta-inf" erroronmissingdir="false" />
                            </jar>
                        </else>
                    </if>
                </sequential>
            </for>
        </target>
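
    One possible workaround (an assumption on my part, not something taken from the build above): since a refid only needs to resolve to something, each optional id could be pre-declared as an empty zipfileset in the generic build, and a project that really has extra content re-declares the same id pointing at its own directory before the archive target runs.

        <!-- hypothetical placeholder so <zipfileset refid="myService-war-etc-2"/> always resolves;
             it matches no files, so it contributes nothing to the WAR -->
        <zipfileset id="myService-war-etc-2" dir="${basedir}" excludes="**/*" />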

    Read the article

  • Passing a list of files to javac

    - by Robert Menteer
    How can I get the javac task to use an existing fileset? In my build.xml I have created several filesets to be used in multiple places throughout the build file. Here is how they are defined:

        <fileset dir="${src}" id="java.source.all">
            <include name="**/*.java" />
        </fileset>

        <fileset dir="${src}" id="java.source.examples">
            <include name="**/Examples/**/*.java" />
        </fileset>

        <fileset dir="${src}" id="java.source.tests">
            <include name="**/Tests/*.java" />
        </fileset>

        <fileset dir="${src}" id="java.source.project">
            <include name="**/*.java" />
            <exclude name="**/Examples/**/*.java" />
            <exclude name="**/Tests/**/*.java" />
        </fileset>

    I have also used macrodef to compile the java files so the javac task does not need to be repeated multiple times. The macro looks like this:

        <macrodef name="compile">
            <attribute name="sourceref"/>
            <sequential>
                <javac srcdir="${src}"
                       destdir="${build}"
                       classpathref="classpath"
                       includeantruntime="no"
                       debug="${debug}">
                    <filelist dir="." files="@{sourceref}" /> <!-- email is about this -->
                </javac>
            </sequential>
        </macrodef>

    What I'm trying to do is compile only the classes that are needed for specific targets, not all the targets in the source tree, and do so without having to specify the files every time. Here is how the targets are defined:

        <target name="compile-examples" depends="init">
            <compile sourceref="${toString:java.source.examples}" />
        </target>

        <target name="compile-project" depends="init">
            <compile sourceref="${toString:java.source.project}" />
        </target>

        <target name="compile-tests" depends="init">
            <compile sourceref="${toString:java.source.tests}" />
        </target>

    As you can see, each target specifies the java files to be compiled as a semicolon-separated list of absolute file names. The only problem with this is that javac does not support filelist. It also does not support fileset, path or pathset. I've tried using but it treats the list as a single file name. Another thing I tried is sending the reference directly (not using toString) and using but include does not have a ref attribute.

    SO THE QUESTION IS: How do you get the javac task to use a reference to a fileset that was defined in another part of the build file? I'm not interested in solutions that cause me to have multiple javac tasks. Completely re-writing the macro is acceptable. Changes to the targets are also acceptable, provided redundant code between targets is kept to a minimum.

    p.s. Another problem is that fileset wants a comma-separated list. I've only done a brief search for a way to convert semicolons to commas and haven't found a way to do that.

    p.p.s. Sorry for the yelling, but some people are too quick to post responses that don't address the subject.
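
    One approach that may work here (a sketch under assumptions, not a verified solution): convert the referenced fileset into a comma-separated list of patterns relative to ${src} with <pathconvert>, and hand that list to javac's includes attribute. The targets would then pass the fileset id itself (e.g. sourceref="java.source.examples") rather than its toString expansion.

        <macrodef name="compile">
            <attribute name="sourceref"/>
            <sequential>
                <!-- turn the referenced fileset into "pkg/A.java,pkg/B.java,..." -->
                <pathconvert property="@{sourceref}.includes" refid="@{sourceref}" pathsep=",">
                    <map from="${src}${file.separator}" to=""/>
                </pathconvert>
                <javac srcdir="${src}"
                       destdir="${build}"
                       includes="${@{sourceref}.includes}"
                       classpathref="classpath"
                       includeantruntime="no"
                       debug="${debug}"/>
            </sequential>
        </macrodef>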

    Read the article

  • CLSF & CLK 2013 Trip Report by Jeff Liu

    - by jamesmorris
    This is a contributed post from Jeff Liu, lead XFS developer for the Oracle mainline Linux kernel team. Recently I attended both the China Linux Storage and Filesystem workshop (CLSF) and the China Linux Kernel conference (CLK), which were held in Shanghai. Here are the highlights of both events.

    CLSF - 17th October

    XFS update (led by Jeff Liu)

    XFS keeps making rapid progress with a lot of changes, focused especially on infrastructure/performance improvements as well as new feature development. This is reflected in a sample statistic comparing XFS, Ext4+JBD2 and Btrfs:

        # git diff --stat --minimal -C -M v3.7..v3.12-rc4 -- fs/xfs
        # git diff --stat --minimal -C -M v3.7..v3.12-rc4 -- fs/ext4 fs/jbd2
        # git diff --stat --minimal -C -M v3.7..v3.12-rc4 -- fs/btrfs

        XFS:       141 files changed, 27598 insertions(+), 19113 deletions(-)
        Ext4+JBD2:  39 files changed, 10487 insertions(+),  5454 deletions(-)
        Btrfs:      70 files changed, 19875 insertions(+),  8130 deletions(-)

    What made up those changes in XFS?

    - Self-describing metadata (CRC32c). This is a new feature and it contributed about 70% of the code changes; it can be enabled via `mkfs.xfs -m crc=1 /dev/xxx` for the v5 superblock.
    - Transaction log space reservation improvements. With this change we can calculate the log space reservation at mount time rather than at runtime, reducing CPU overhead.
    - User namespace support, so both XFS and USERNS can be enabled in the kernel configuration beginning with Linux 3.10. Thanks to Dwight Engen for his efforts on this.
    - Split project/group quota inodes. Originally, project quota could not be enabled together with group quota because they shared the same quota file inode; now it works, but only for the v5 superblock, i.e. with CRC enabled.
    - CONFIG_XFS_WARN, a new lightweight runtime debugger which can be deployed in production environments.
    - Readahead of log objects during recovery; this change speeds up log replay significantly.
    - Speculative preallocation inode tracking, clearing and throttling. The main purpose is to deal with inodes that have post-EOF space due to speculative preallocation, and to support improved quota management that frees up a significant amount of unwritten space at or near EDQUOT. It supports background scanning, which occurs on a longish interval (5 minutes by default, tunable), and on-demand scanning/trimming via ioctl(2).

    Heated arguments ensued in this session, especially over the comparison between Ext4 and Btrfs in different areas; I had to spend the whole morning of the first day answering those questions. We basically agreed that XFS is the best choice on Linux nowadays, because:

    - Stability: XFS has had a good stability record over the past 10 years. Fengguang Wu, who leads the 0-day kernel test project, also said that he has observed fewer errors in XFS than in other filesystems over the past year or more. I owe this to the XFS upstream code reviewers, who always perform serious code review as well as testing.
    - Good performance for both large and small files; "XFS does not work well for small files" has been an old story for years.
    - Best choice (maybe) for distributed PB-scale filesystems; e.g. Ceph recommends deploying the OSD daemon on XFS because Ext4 has a limited xattr size.
    - Best choice for large storage (>16TB); Ext4 does not support a single file larger than around 15.95TB.
    - Scalability: any objection to XFS being best on this point? :)
    - XFS handles transaction concurrency better than Ext4. Why? The maximum size of the log in XFS is 2038MB, compared to 128MB in Ext4.
    - Misc: Ext4 is widely used and has proved fast and stable in various loads and scenarios; XFS just needs more customers; and Btrfs is still on the road to manhood.
    Ceph Introduction (led by Li Wang)

    This was a hot topic. Li gave us a nice introduction to the design as well as their current work. The Ceph client has actually been included in the Linux kernel since 2.6.34 and supported by OpenStack since Folsom, but it seems it has not yet been widely deployed in production environments. Their major work focuses on inline data support: because metadata and data storage are separate, a file access needs two round trips, fetching the metadata from the MDS and then getting the data from an OSD, and small-file access is limited by network latency. The solution is to store the data of small files alongside the metadata, so that when accessing a small file the metadata server can push both metadata and data to the client at the same time. In this way they avoid the overhead of calculating the data offset and save the communication with the OSD. For this feature they have only run some small-scale testing, but they really saw noticeable improvements. Test environment: Intel 2 CPU, 12 cores, 64GB RAM, Ubuntu 12.04, Ceph 0.56.6 with a 200GB SATA disk, 15 OSDs, 1 MDS, 1 MON. The sequential read performance for 1K-sized files improved by about 50%. I asked Li and Zheng Yan (the core developer of Ceph, who also worked on Btrfs) whether Ceph is really stable and can be deployed in production for large-scale, PB-level storage, but they could not give a positive answer; it looks as if Ceph is not even rolled out across Dreamhost (subject to confirmation). According to Li, they have only deployed Ceph for small-scale storage (32 nodes), although they would like to try 6000 nodes in the future.

    Improve Linux swap for flash storage (led by Shaohua Li)

    Because of its high density, low power and low price, flash storage (SSD) is a good candidate to partially replace DRAM. A quick answer for this is to use the SSD as swap. But Linux swap is designed for slow hard-disk storage, so there are a lot of challenges to using SSD efficiently for swap.

    SWAPOUT:

    - swap_map scan: swap_map is the in-memory data structure that tracks swap disk usage, but it is a slow linear scan. It becomes a bottleneck when finding many adjacent pages in the use of SSD. Shaohua Li has changed it to a cluster (128K) list, resulting in an O(1) algorithm. However, this approach needs restrictive cluster alignment and is only enabled for SSD.
    - IO pattern: in most cases the swap IO comes in an interleaved pattern, because of multiple reclaimers or because a free cluster is shared by all reclaimers. Even though the block layer can merge interleaved IO to some extent, we cannot count on it completely. Hence a per-CPU cluster is added on top of the previous change; it helps a reclaimer do sequential IO, and the block layer can merge the IO more easily.
    - TLB flush: if we're reclaiming an active page, we should first move the page from the active LRU list to the inactive LRU list, and then reclaim the page from the inactive LRU to swap it out. During the process we need to clear the PTE twice: first the 'A' (ACCESS) bit, then the 'P' (PRESENT) bit. Processors need to send lots of IPIs, which makes the TLB flush really expensive. Some work has been done to improve this, including reworking smp_call_function_many() or removing the first TLB flush on x86, but there are still some arguments here and only part of the work has been pushed to mainline.

    SWAPIN: a page fault does iodepth=1 synchronous IO, but it's a bit of a waste to issue only a page-sized IO. The obvious solution is swap readahead.
    But the current in-kernel swap readahead is arbitrary (always 8 pages), and it doesn't perform well for either random or sequential access workloads. Shaohua introduced a new flag for madvise(MADV_WILLNEED) to do swap prefetch, so the changes happen in the userspace API and leave the in-kernel readahead unchanged (though I think some improvement could also be done there).

    SWAP discard: as we know, discard is important for SSD write throughput, but the current swap discard implementation is synchronous. He changed it to an asynchronous discard, which allows discard and write to run at the same time; the unit of discard is also optimized to a cluster.

    Misc: lock contention. With many concurrent swapouts and swapins, lock contention on things like anon_vma or swap_lock is high, so he changed swap_lock to a per-swap-device lock. But there is still some lock contention on very high-speed SSDs because of the swapcache address_space lock.

    Zproject (led by Bob Liu)

    Bob gave us a very nice introduction to the current state of memory compression. There are now three projects (zswap/zram/zcache) which all aim to smooth out swap IO storms and improve performance, but each has its own pros and cons.

    - ZSWAP: implemented on top of the frontswap API, it uses a dynamic allocator named zbud to allocate free pages. Zbud means pairs of zpages are "buddied": it can store at most two compressed pages in one page frame, so the maximum compression ratio is 50%. Each page frame is LRU-linked and can be shrunk under memory pressure. If the compressed memory pool reaches its limit, shrinking or reclaim happens: a page frame is decompressed into two newly allocated pages which are then written to the real swap device, but this can fail when allocating the two pages.
    - ZRAM: acts as a compressed ramdisk used as a swap device, and uses zsmalloc as its allocator, which has high density but may have fragmentation issues. Besides, page reclaim is hard, since it needs more pages to uncompress and free just one page. ZRAM is preferred by embedded systems, which may not have any real swap device. Both ZRAM and ZSWAP are currently in the drivers/staging tree, and in the mm community there are discussions about merging ZRAM into ZSWAP or vice versa, but no agreement yet.
    - ZCACHE: handles file page compression, but it was removed from staging recently.

    From industry (led by Tang Jie, LSI)

    An LSI engineer introduced several new products to us. The first was RAID 5/6 cards that use full-stripe writes to improve performance. The second was the SandForce flash controller, which can understand data file types (data entropy) to reduce write amplification (WA) for nearly all writes; it's called DuraWrite, and the typical WA is 0.5. What's more, if its Dynamic Logical Capacity function module is enabled, the controller can do data compression that is transparent to the upper layers. LSI testing shows that with this virtual capacity enabled, a 1x TB drive can support up to 2x TB of capacity, but the application must monitor free flash space to maintain optimal performance and to guard against free-flash-space exhaustion. He said the most useful application is for databases. Another thing I think is worth mentioning is an NV-DRAM memory in NMR/Raptor which is directly exposed to the host system. Applications can access the NV-DRAM directly via a memory address, using the standard system call mmap(). He said that it is very useful for database logging now.
    This kind of NVM product has begun to appear in recent years, and it is said that Samsung is building a research center in China for related products. IMHO, NVM will have an effect on the current OS layers, especially on file systems; e.g. journaling may need to be redesigned to fully utilize these nonvolatile memories.

    OCFS2 (led by Canquan Shen)

    Without a doubt, Huawei has been the biggest contributor to OCFS2 in the past two years. They have posted 46 upstream patches, and 39 patches have been merged. Their current project is based on 32/64-node clusters, but they have also tried 128 nodes at the experimental stage. The major work they are doing is to support ATS (atomic test and set), which can work together with the DLM at the same time. It looks like this idea was inspired by the VMware VMFS locking, i.e. http://blogs.vmware.com/vsphere/2012/05/vmfs-locking-uncovered.html

    CLK - 18th October 2013

    Improving Linux Development with Better Tools (Andi Kleen)

    This talk focused on how to find and solve bugs as Linux's complexity keeps growing. Generally, we can do this with the following kinds of tools:

    - Static code checkers, e.g. sparse, smatch, coccinelle, the clang checker, checkpatch, gcc -W/LTO, stanse. These can check a lot of things, from simple mistakes to complex problems, but the challenges are that some are very slow, there are false positives, and it may take a concentrated effort to get the false positives down. In particular, no static checker I have found can follow indirect calls ("OO in C", common in the kernel): struct foo_ops { int (*do_foo)(struct foo *obj); }; foo->do_foo(foo);
    - Dynamic runtime checkers, e.g. thread checkers, kmemcheck, lockdep. Ideally all kernel code would come with a test suite; then someone could run all the dynamic checkers.
    - Fuzzers/test suites, e.g. Trinity, a great tool that finds many bugs but needs a manual model for each syscall. Modern fuzzers are built around automatic feedback, but not for the kernel yet: http://taviso.decsystem.org/making_software_dumber.pdf
    - Debuggers/tracers to understand code, e.g. ftrace, which can dump on events/oops/custom triggers, but in many cases there is still too much overhead to keep it running at all times while debugging.
    - Tools to read/understand source, e.g. grep/cscope, which work great for many cases but do not understand indirect pointers (the OO-in-C model used in the kernel). "Give us all do_foo instances": struct foo_ops { int (*do_foo)(struct foo *obj); } = { .do_foo = my_foo }; foo->do_foo(foo); It would be great to have a cscope-like tool that understands this based on types/initializers.

    XFS: The High Performance Enterprise File System (Jeff Liu) [slides]

    I gave a talk introducing the disk layout and unique features, as well as the recent changes. The slides include some charts comparing the performance of XFS/Btrfs/Ext4 for small files. About a dozen attendees raised their hands when I asked who had experience with XFS. I remember that when I asked the same question at LinuxCon Japan, only 3 people raised their hands, but they were Chris Mason, Ric Wheeler, and one other attendee. The audience questions mainly focused on stability and on comparison with other file systems.

    Linux Containers (Feng Gao)

    The speaker introduced the purpose of the various kinds of namespaces, including mount/UTS/IPC/Network/Pid/User, as well as the system API/ABI. For the userspace tools, he mainly focused on Libvirt LXC rather than LXC itself.
    Libvirt LXC is another userspace container management tool, implemented as one type of libvirt driver; it can manage containers, create namespaces, create a private filesystem layout for a container, create devices for a container, and set up resource controllers via cgroups. In this talk, Feng also mentioned two more possible new namespaces for the future: the first is audit, though it is not yet clear whether it should be assigned to the user namespace or not; the other is syslog, but the question is, do we really need it?

    In-memory Compression (Bob Liu)

    The same as at CLSF, a nice introduction that I have already covered above.

    Misc

    There were some other talks related to ACPI-based memory hotplug, smart wake-affinity in the scheduler, etc., but my head is not big enough to have recorded all of them. -- Jeff Liu

    Read the article

  • Windows Azure Service Bus Splitter and Aggregator

    - by Alan Smith
    This article will cover basic implementations of the Splitter and Aggregator patterns using the Windows Azure Service Bus. The content will be included in the next release of the "Windows Azure Service Bus Developer Guide", along with some other patterns I am working on. I've taken the pattern descriptions from the book "Enterprise Integration Patterns" by Gregor Hohpe. I bought a copy of the book in 2004, and recently dusted it off when I started to look at implementing the patterns on the Windows Azure Service Bus. Gregor also presented a session in 2011, "Enterprise Integration Patterns: Past, Present and Future", which is well worth a look. I'll be covering more patterns in the coming weeks; I'm currently working on Wire-Tap and Scatter-Gather. There will no doubt be a section on implementing these patterns in my "SOA, Connectivity and Integration using the Windows Azure Service Bus" course.

    There are a number of scenarios where a message needs to be divided into a number of sub messages, and also where a number of sub messages need to be combined to form one message. The splitter and aggregator patterns provide a definition of how this can be achieved. This section will focus on the implementation of basic splitter and aggregator patterns using the Windows Azure Service Bus direct programming model. In BizTalk Server, receive pipelines are typically used to implement the splitter pattern, with sequential convoy orchestrations often used to aggregate messages. In the current release of the Service Bus, there is no functionality in the direct programming model that implements these patterns, so it is up to the developer to implement them in the applications that send and receive messages.

    Splitter

    A message splitter takes a message and splits it into a number of sub messages. As there are different scenarios for how a message can be split into sub messages, message splitters are implemented using different algorithms. The Enterprise Integration Patterns book describes the splitter pattern as follows: "How can we process a message if it contains multiple elements, each of which may have to be processed in a different way? Use a Splitter to break out the composite message into a series of individual messages, each containing data related to one item." The Enterprise Integration Patterns website provides a description of the Splitter pattern here. In some scenarios a batch message could be split into the sub messages that are contained in the batch. The splitting of a message could be based on the message type of a sub message, or on the trading partner that the sub message is to be sent to.

    Aggregator

    An aggregator takes a stream of related messages and combines them together to form one message. The Enterprise Integration Patterns book describes the aggregator pattern as follows: "How do we combine the results of individual, but related messages so that they can be processed as a whole? Use a stateful filter, an Aggregator, to collect and store individual messages until a complete set of related messages has been received. Then, the Aggregator publishes a single message distilled from the individual messages." The Enterprise Integration Patterns website provides a description of the Aggregator pattern here. A common example of the need for an aggregator is in scenarios where a stream of messages needs to be combined into a daily batch to be sent to a legacy line-of-business application.
The BizTalk Server EDI functionality provides support for batching messages in this way using a sequential convoy orchestration. Scenario The scenario for this implementation of the splitter and aggregator patterns is the sending and receiving of large messages using a Service Bus queue. In the current release, the Windows Azure Service Bus currently supports a maximum message size of 256 KB, with a maximum header size of 64 KB. This leaves a safe maximum body size of 192 KB. The BrokeredMessage class will support messages larger than 256 KB; in fact the Size property is of type long, implying that very large messages may be supported at some point in the future. The 256 KB size restriction is set in the service bus components that are deployed in the Windows Azure data centers. One of the ways of working around this size restriction is to split large messages into a sequence of smaller sub messages in the sending application, send them via a queue, and then reassemble them in the receiving application. This scenario will be used to demonstrate the pattern implementations. Implementation The splitter and aggregator will be used to provide functionality to send and receive large messages over the Windows Azure Service Bus. In order to make the implementations generic and reusable they will be implemented as a class library. The splitter will be implemented in the LargeMessageSender class and the aggregator in the LargeMessageReceiver class. A class diagram showing the two classes is shown below. Implementing the Splitter The splitter will take a large brokered message, and split the messages into a sequence of smaller sub-messages that can be transmitted over the service bus messaging entities. The LargeMessageSender class provides a Send method that takes a large brokered message as a parameter. The implementation of the class is shown below; console output has been added to provide details of the splitting operation. public class LargeMessageSender {     private static int SubMessageBodySize = 192 * 1024;     private QueueClient m_QueueClient;       public LargeMessageSender(QueueClient queueClient)     {         m_QueueClient = queueClient;     }       public void Send(BrokeredMessage message)     {         // Calculate the number of sub messages required.         long messageBodySize = message.Size;         int nrSubMessages = (int)(messageBodySize / SubMessageBodySize);         if (messageBodySize % SubMessageBodySize != 0)         {             nrSubMessages++;         }           // Create a unique session Id.         string sessionId = Guid.NewGuid().ToString();         Console.WriteLine("Message session Id: " + sessionId);         Console.Write("Sending {0} sub-messages", nrSubMessages);           Stream bodyStream = message.GetBody<Stream>();         for (int streamOffest = 0; streamOffest < messageBodySize;             streamOffest += SubMessageBodySize)         {                                     // Get the stream chunk from the large message             long arraySize = (messageBodySize - streamOffest) > SubMessageBodySize                 ? 
SubMessageBodySize : messageBodySize - streamOffest;             byte[] subMessageBytes = new byte[arraySize];             int result = bodyStream.Read(subMessageBytes, 0, (int)arraySize);             MemoryStream subMessageStream = new MemoryStream(subMessageBytes);               // Create a new message             BrokeredMessage subMessage = new BrokeredMessage(subMessageStream, true);             subMessage.SessionId = sessionId;               // Send the message             m_QueueClient.Send(subMessage);             Console.Write(".");         }         Console.WriteLine("Done!");     }} The LargeMessageSender class is initialized with a QueueClient that is created by the sending application. When the large message is sent, the number of sub messages is calculated based on the size of the body of the large message. A unique session Id is created to allow the sub messages to be sent as a message session, this session Id will be used for correlation in the aggregator. A for loop in then used to create the sequence of sub messages by creating chunks of data from the stream of the large message. The sub messages are then sent to the queue using the QueueClient. As sessions are used to correlate the messages, the queue used for message exchange must be created with the RequiresSession property set to true. Implementing the Aggregator The aggregator will receive the sub messages in the message session that was created by the splitter, and combine them to form a single, large message. The aggregator is implemented in the LargeMessageReceiver class, with a Receive method that returns a BrokeredMessage. The implementation of the class is shown below; console output has been added to provide details of the splitting operation.   public class LargeMessageReceiver {     private QueueClient m_QueueClient;       public LargeMessageReceiver(QueueClient queueClient)     {         m_QueueClient = queueClient;     }       public BrokeredMessage Receive()     {         // Create a memory stream to store the large message body.         MemoryStream largeMessageStream = new MemoryStream();           // Accept a message session from the queue.         MessageSession session = m_QueueClient.AcceptMessageSession();         Console.WriteLine("Message session Id: " + session.SessionId);         Console.Write("Receiving sub messages");           while (true)         {             // Receive a sub message             BrokeredMessage subMessage = session.Receive(TimeSpan.FromSeconds(5));               if (subMessage != null)             {                 // Copy the sub message body to the large message stream.                 Stream subMessageStream = subMessage.GetBody<Stream>();                 subMessageStream.CopyTo(largeMessageStream);                   // Mark the message as complete.                 subMessage.Complete();                 Console.Write(".");             }             else             {                 // The last message in the sequence is our completeness criteria.                 Console.WriteLine("Done!");                 break;             }         }                     // Create an aggregated message from the large message stream.         BrokeredMessage largeMessage = new BrokeredMessage(largeMessageStream, true);         return largeMessage;     } }   The LargeMessageReceiver initialized using a QueueClient that is created by the receiving application. The receive method creates a memory stream that will be used to aggregate the large message body. 
    The AcceptMessageSession method on the QueueClient is then called, which will wait for the first message in a message session to become available on the queue. As AcceptMessageSession can throw a timeout exception if no message is available on the queue after 60 seconds, a real-world implementation should handle this accordingly. Once the message session is accepted, the sub messages in the session are received, and their message body streams are copied to the memory stream. Once all the messages have been received, the memory stream is used to create a large message that is then returned to the receiving application.

    Testing the Implementation

    The splitter and aggregator are tested by creating a message sender and a message receiver application. The payload for the large message will be one of the webcast video files from http://www.cloudcasts.net/; the file size is 9,697 KB, well over the 256 KB threshold imposed by the Service Bus. As the splitter and aggregator are implemented in a separate class library, the code used in the sender and receiver consoles is fairly basic. The implementation of the main method of the sending application is shown below.

        static void Main(string[] args)
        {
            // Create a token provider with the relevant credentials.
            TokenProvider credentials =
                TokenProvider.CreateSharedSecretTokenProvider
                (AccountDetails.Name, AccountDetails.Key);

            // Create a URI for the service bus.
            Uri serviceBusUri = ServiceBusEnvironment.CreateServiceUri
                ("sb", AccountDetails.Namespace, string.Empty);

            // Create the MessagingFactory
            MessagingFactory factory = MessagingFactory.Create(serviceBusUri, credentials);

            // Use the MessagingFactory to create a queue client
            QueueClient queueClient = factory.CreateQueueClient(AccountDetails.QueueName);

            // Open the input file.
            FileStream fileStream = new FileStream(AccountDetails.TestFile, FileMode.Open);

            // Create a BrokeredMessage for the file.
            BrokeredMessage largeMessage = new BrokeredMessage(fileStream, true);

            Console.WriteLine("Sending: " + AccountDetails.TestFile);
            Console.WriteLine("Message body size: " + largeMessage.Size);
            Console.WriteLine();

            // Send the message with a LargeMessageSender
            LargeMessageSender sender = new LargeMessageSender(queueClient);
            sender.Send(largeMessage);

            // Close the messaging factory.
            factory.Close();
        }

    The implementation of the main method of the receiving application is shown below.

        static void Main(string[] args)
        {
            // Create a token provider with the relevant credentials.
            TokenProvider credentials =
                TokenProvider.CreateSharedSecretTokenProvider
                (AccountDetails.Name, AccountDetails.Key);

            // Create a URI for the service bus.
            Uri serviceBusUri = ServiceBusEnvironment.CreateServiceUri
                ("sb", AccountDetails.Namespace, string.Empty);

            // Create the MessagingFactory
            MessagingFactory factory = MessagingFactory.Create(serviceBusUri, credentials);

            // Use the MessagingFactory to create a queue client
            QueueClient queueClient = factory.CreateQueueClient(AccountDetails.QueueName);

            // Create a LargeMessageReceiver and receive the message.
            LargeMessageReceiver receiver = new LargeMessageReceiver(queueClient);
            BrokeredMessage largeMessage = receiver.Receive();

            Console.WriteLine("Received message");
            Console.WriteLine("Message body size: " + largeMessage.Size);

            string testFile = AccountDetails.TestFile.Replace(@"\In\", @"\Out\");
            Console.WriteLine("Saving file: " + testFile);

            // Save the message body as a file.
            Stream largeMessageStream = largeMessage.GetBody<Stream>();
            largeMessageStream.Seek(0, SeekOrigin.Begin);
            FileStream fileOut = new FileStream(testFile, FileMode.Create);
            largeMessageStream.CopyTo(fileOut);
            fileOut.Close();

            Console.WriteLine("Done!");
        }

    In order to test the application, the sending application is executed, which will use the LargeMessageSender class to split the message and place it on the queue. The output of the sender console is shown below. The console shows that the body size of the large message was 9,929,365 bytes, and that the message was sent as a sequence of 51 sub messages. When the receiving application is executed, the results are shown below. The console application shows that the aggregator has received the 51 messages from the message sequence that was created in the sending application. The messages have been aggregated to form a message with a body of 9,929,365 bytes, which is the same as the original large message. The message body is then saved as a file.

    Improvements to the Implementation

    The splitter and aggregator patterns in this implementation were created in order to show the usage of the patterns in a demo, which they do quite well. When implementing these patterns in a real-world scenario there are a number of improvements that could be made to the design.

    Copying Message Header Properties: when sending a large message using these classes, it would be great if the message header properties in the message that was received were copied from the message that was sent. The sending application may well add information to the message context that will be required in the receiving application. When the sub messages are created in the splitter, the header properties in the first message could be set to the values in the original large message. The aggregator could then use the values from this first sub message to set the properties in the message header of the large message during the aggregation process.

    Using Asynchronous Methods: the current implementation uses the synchronous send and receive methods of the QueueClient class. It would be much more performant to use the asynchronous methods; however, doing so may well affect the sequence in which the sub messages are enqueued, which would require the implementation of a resequencer in the aggregator to restore the correct message sequence.

    Handling Exceptions: in order to keep the code readable, no exception handling was added to the implementations. In a real-world scenario exceptions should be handled accordingly.

    Read the article

  • Problems with data driven testing in MSTest

    - by severj3
    Hello, I am trying to get data-driven testing to work in C# with MSTest/Selenium. Here is a sample of some of my code trying to set it up:

        [TestClass]
        public class NewTest
        {
            private ISelenium selenium;
            private StringBuilder verificationErrors;

            [DeploymentItem("GoogleTestData.xls")]
            [DataSource("System.Data.OleDb",
                "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=GoogleTestData.xls;Persist Security Info=False;Extended Properties='Excel 8.0'",
                "TestSearches$", DataAccessMethod.Sequential)]
            [TestMethod]
            public void GoogleTest()
            {
                selenium = new DefaultSelenium("localhost", 4444, "*iehta", "http://www.google.com");
                selenium.Start();
                verificationErrors = new StringBuilder();
                var searchingTerm = TestContext.DataRow["SearchingString"].ToString();
                var expectedResult = TestContext.DataRow["ExpectedTextResults"].ToString();

    Here's my error:

        Error 3: An object reference is required for the non-static field, method, or property 'Microsoft.VisualStudio.TestTools.UnitTesting.TestContext.DataRow.get' E:\Projects\SeleniumProject\SeleniumProject\MaverickTest.cs 32 33 SeleniumProject

    The error underlines the "TestContext.DataRow" part of both statements. I've really been struggling with this one, thanks!
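
    For what it's worth, this particular compiler error usually means the code is referring to the TestContext type name rather than an instance; MSTest injects the context through a public TestContext property on the test class at run time. A sketch of the usual pattern (based on the standard MSTest convention, not taken from the original post):

        private TestContext testContextInstance;

        public TestContext TestContext
        {
            get { return testContextInstance; }
            set { testContextInstance = value; }
        }

        // then inside the test method:
        // var searchingTerm = TestContext.DataRow["SearchingString"].ToString();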

    Read the article

  • Calling AuditQuerySystemPolicy() (advapi32.dll) from C# returns "The parameter is incorrect"

    - by JCCyC
    The sequence is like follows: Open a policy handle with LsaOpenPolicy() (not shown) Call LsaQueryInformationPolicy() to get the number of categories; For each category: Call AuditLookupCategoryGuidFromCategoryId() to turn the enum value into a GUID; Call AuditEnumerateSubCategories() to get a list of the GUIDs of all subcategories; Call AuditQuerySystemPolicy() to get the audit policies for the subcategories. All of these work and return expected, sensible values except the last. Calling AuditQuerySystemPolicy() gets me a "The parameter is incorrect" error. I'm thinking there must be some subtle unmarshaling problem. I'm probably misinterpreting what exactly AuditEnumerateSubCategories() returns, but I'm stumped. You'll see (commented) I tried to dereference the return pointer from AuditEnumerateSubCategories() as a pointer. Doing or not doing that gives the same result. Code: #region LSA types public enum POLICY_INFORMATION_CLASS { PolicyAuditLogInformation = 1, PolicyAuditEventsInformation, PolicyPrimaryDomainInformation, PolicyPdAccountInformation, PolicyAccountDomainInformation, PolicyLsaServerRoleInformation, PolicyReplicaSourceInformation, PolicyDefaultQuotaInformation, PolicyModificationInformation, PolicyAuditFullSetInformation, PolicyAuditFullQueryInformation, PolicyDnsDomainInformation } public enum POLICY_AUDIT_EVENT_TYPE { AuditCategorySystem, AuditCategoryLogon, AuditCategoryObjectAccess, AuditCategoryPrivilegeUse, AuditCategoryDetailedTracking, AuditCategoryPolicyChange, AuditCategoryAccountManagement, AuditCategoryDirectoryServiceAccess, AuditCategoryAccountLogon } [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)] public struct POLICY_AUDIT_EVENTS_INFO { public bool AuditingMode; public IntPtr EventAuditingOptions; public UInt32 MaximumAuditEventCount; } [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)] public struct GUID { public UInt32 Data1; public UInt16 Data2; public UInt16 Data3; public Byte Data4a; public Byte Data4b; public Byte Data4c; public Byte Data4d; public Byte Data4e; public Byte Data4f; public Byte Data4g; public Byte Data4h; public override string ToString() { return Data1.ToString("x8") + "-" + Data2.ToString("x4") + "-" + Data3.ToString("x4") + "-" + Data4a.ToString("x2") + Data4b.ToString("x2") + "-" + Data4c.ToString("x2") + Data4d.ToString("x2") + Data4e.ToString("x2") + Data4f.ToString("x2") + Data4g.ToString("x2") + Data4h.ToString("x2"); } } #endregion #region LSA Imports [DllImport("kernel32.dll")] extern static int GetLastError(); [DllImport("advapi32.dll", CharSet = CharSet.Unicode, PreserveSig = true)] public static extern UInt32 LsaNtStatusToWinError( long Status); [DllImport("advapi32.dll", CharSet = CharSet.Unicode, PreserveSig = true)] public static extern long LsaOpenPolicy( ref LSA_UNICODE_STRING SystemName, ref LSA_OBJECT_ATTRIBUTES ObjectAttributes, Int32 DesiredAccess, out IntPtr PolicyHandle ); [DllImport("advapi32.dll", CharSet = CharSet.Unicode, PreserveSig = true)] public static extern long LsaClose(IntPtr PolicyHandle); [DllImport("advapi32.dll", CharSet = CharSet.Unicode, PreserveSig = true)] public static extern long LsaFreeMemory(IntPtr Buffer); [DllImport("advapi32.dll", CharSet = CharSet.Unicode, PreserveSig = true)] public static extern void AuditFree(IntPtr Buffer); [DllImport("advapi32.dll", SetLastError = true, PreserveSig = true)] public static extern long LsaQueryInformationPolicy( IntPtr PolicyHandle, POLICY_INFORMATION_CLASS InformationClass, out IntPtr Buffer); 
[DllImport("advapi32.dll", SetLastError = true, PreserveSig = true)] public static extern bool AuditLookupCategoryGuidFromCategoryId( POLICY_AUDIT_EVENT_TYPE AuditCategoryId, IntPtr pAuditCategoryGuid); [DllImport("advapi32.dll", SetLastError = true, PreserveSig = true)] public static extern bool AuditEnumerateSubCategories( IntPtr pAuditCategoryGuid, bool bRetrieveAllSubCategories, out IntPtr ppAuditSubCategoriesArray, out ulong pCountReturned); [DllImport("advapi32.dll", SetLastError = true, PreserveSig = true)] public static extern bool AuditQuerySystemPolicy( IntPtr pSubCategoryGuids, ulong PolicyCount, out IntPtr ppAuditPolicy); #endregion Dictionary<string, UInt32> retList = new Dictionary<string, UInt32>(); long lretVal; uint retVal; IntPtr pAuditEventsInfo; lretVal = LsaQueryInformationPolicy(policyHandle, POLICY_INFORMATION_CLASS.PolicyAuditEventsInformation, out pAuditEventsInfo); retVal = LsaNtStatusToWinError(lretVal); if (retVal != 0) { LsaClose(policyHandle); throw new System.ComponentModel.Win32Exception((int)retVal); } POLICY_AUDIT_EVENTS_INFO myAuditEventsInfo = new POLICY_AUDIT_EVENTS_INFO(); myAuditEventsInfo = (POLICY_AUDIT_EVENTS_INFO)Marshal.PtrToStructure(pAuditEventsInfo, myAuditEventsInfo.GetType()); IntPtr subCats = IntPtr.Zero; ulong nSubCats = 0; for (int audCat = 0; audCat < myAuditEventsInfo.MaximumAuditEventCount; audCat++) { GUID audCatGuid = new GUID(); if (!AuditLookupCategoryGuidFromCategoryId((POLICY_AUDIT_EVENT_TYPE)audCat, new IntPtr(&audCatGuid))) { int causingError = GetLastError(); LsaFreeMemory(pAuditEventsInfo); LsaClose(policyHandle); throw new System.ComponentModel.Win32Exception(causingError); } if (!AuditEnumerateSubCategories(new IntPtr(&audCatGuid), true, out subCats, out nSubCats)) { int causingError = GetLastError(); LsaFreeMemory(pAuditEventsInfo); LsaClose(policyHandle); throw new System.ComponentModel.Win32Exception(causingError); } // Dereference the first pointer-to-pointer to point to the first subcategory // subCats = (IntPtr)Marshal.PtrToStructure(subCats, subCats.GetType()); if (nSubCats > 0) { IntPtr audPolicies = IntPtr.Zero; if (!AuditQuerySystemPolicy(subCats, nSubCats, out audPolicies)) { int causingError = GetLastError(); if (subCats != IntPtr.Zero) AuditFree(subCats); LsaFreeMemory(pAuditEventsInfo); LsaClose(policyHandle); throw new System.ComponentModel.Win32Exception(causingError); } AUDIT_POLICY_INFORMATION myAudPol = new AUDIT_POLICY_INFORMATION(); for (ulong audSubCat = 0; audSubCat < nSubCats; audSubCat++) { // Process audPolicies[audSubCat], turn GUIDs into names, fill retList. 
// http://msdn.microsoft.com/en-us/library/aa373931%28VS.85%29.aspx // http://msdn.microsoft.com/en-us/library/bb648638%28VS.85%29.aspx IntPtr itemAddr = IntPtr.Zero; IntPtr itemAddrAddr = new IntPtr(audPolicies.ToInt64() + (long)(audSubCat * (ulong)Marshal.SizeOf(itemAddr))); itemAddr = (IntPtr)Marshal.PtrToStructure(itemAddrAddr, itemAddr.GetType()); myAudPol = (AUDIT_POLICY_INFORMATION)Marshal.PtrToStructure(itemAddr, myAudPol.GetType()); retList[myAudPol.AuditSubCategoryGuid.ToString()] = myAudPol.AuditingInformation; } if (audPolicies != IntPtr.Zero) AuditFree(audPolicies); } if (subCats != IntPtr.Zero) AuditFree(subCats); subCats = IntPtr.Zero; nSubCats = 0; } lretVal = LsaFreeMemory(pAuditEventsInfo); retVal = LsaNtStatusToWinError(lretVal); if (retVal != 0) throw new System.ComponentModel.Win32Exception((int)retVal); lretVal = LsaClose(policyHandle); retVal = LsaNtStatusToWinError(lretVal); if (retVal != 0) throw new System.ComponentModel.Win32Exception((int)retVal);

    Read the article

  • Integer ID obfuscation techniques

    - by Chris
    Hi there, I'm looking for an easy and reversible method of obfuscating integer IDs. Ideally, I'd want the resulting obfuscation to be at most eight characters in length and non-sequential, meaning that the obfuscation of "1" should look nothing like the obfuscation of "2" and so on. This isn't meant to be secure by any means, so that isn't a huge concern. Additionally, the integers I'll be obfuscating aren't large - between one and 10,000 - but I don't want any collisions, either. Does anybody have any ideas for something that would fit these criteria? Thanks! Chris
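
    One cheap, fully reversible option is modular multiplication: multiply by an odd constant modulo 2^32 on the way out, and by its modular inverse on the way back. Small, sequential IDs then map to widely scattered values that fit in eight hex characters, with no collisions. A sketch (my own illustration, not from the question; the constant is arbitrary, and pow(x, -1, m) needs Python 3.8+):

        MULT = 2654435761                 # any odd 32-bit constant works
        INV = pow(MULT, -1, 2**32)        # modular inverse of MULT mod 2**32

        def obfuscate(n):
            return format((n * MULT) % 2**32, '08x')   # exactly 8 hex characters

        def deobfuscate(s):
            return (int(s, 16) * INV) % 2**32

        assert deobfuscate(obfuscate(1)) == 1
        assert obfuscate(1) != obfuscate(2)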

    Read the article

  • What is your development checklist for Java low-latency application?

    - by user49767
    I would like to create a comprehensive checklist for Java low-latency applications. Can you add your checklist here? Here is my list:

    1. Make your objects immutable.
    2. Try to reduce synchronized methods.
    3. Locking order should be well documented and handled carefully.
    4. Use a profiler.
    5. Use Amdahl's law to find the sequential execution path.
    6. Use the Java 5 concurrency utilities and locks.
    7. Avoid thread priorities, as they are platform dependent.
    8. JVM warmup can be used.

    By my definition, a low-latency application is one that is tuned for every millisecond.
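
    A quick illustration of item 5: Amdahl's law puts a hard ceiling on what tuning the parallel part can buy, which is why finding the sequential path matters. A small sketch with my own example figures (not part of the original checklist):

        public class AmdahlDemo {
            public static void main(String[] args) {
                double p = 0.90;   // fraction of the work that can run in parallel
                int cores = 8;
                double speedup = 1.0 / ((1 - p) + p / cores);
                // prints roughly 4.7x: the 10% sequential path dominates
                System.out.printf("max speedup on %d cores: %.1fx%n", cores, speedup);
            }
        }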

    Read the article

  • Python (pdb) - Queueing up commands to execute

    - by kpatelPro
    I am implementing a "breakpoint" system for use in my Python development that will allow me to call a function that, in essence, calls pdb.set_trace(); Some of the functionality that I would like to implement requires me to control pdb from code while I am within a set_trace context. Example: disableList = [] def breakpoint(name=None): def d(): disableList.append(name) #**** #issue 'run' command to pdb so user #does not have to type 'c' #**** if name in disableList: return print "Use d() to disable breakpoint, 'c' to continue" pdb.set_trace(); In the above example, how do I implement the comments demarked by the #**** ? In other parts of this system, I would like to issue an 'up' command, or two sequential 'up' commands without leaving the pdb session (so the user ends up at a pdb prompt but up two levels on the call stack. Thanks!

    Read the article

  • Workflow Foundation 4 - DeclarativeServiceLibrary - Error while calling second ReceiveAndSendReply

    - by dotnetexperiments
    Hi, I have created a DeclarativeServiceLibrary using VS2010 beta 2, Please check this image of Sequential Service Following is the code used to call these two activities ` int? data = 123; ServiceReference1.ServiceClient client1 = new ServiceReference1.ServiceClient(); string result1 = client1.GetData(data); //This line shows error :( string result2 = client1.Operation1(); Response.Write(result1 + " :: ::" + result2);` client1.GetData works perfectly, but client1.Operation1 show the following error. Please let me know how to fix this. There is no context attached to the incoming message for the service and the current operation is not marked with "CanCreateInstance = true". In order to communicate with this service check whether the incoming binding supports the context protocol and has a valid context initialized.

    Read the article

  • Breakpoints are ignored when debugging in Visual Studio 2008 on 64-bit system

    - by Arnold Zokas
    I'm trying to debug an ASP.NET web application in this environment: Windows Server Standard 2008 SP2 x64 Single-core CPU 4GB RAM Visual Studio 2008 with Remote Debugger SP1 .NET 3.5 Web Application running in IIS in 64-bit mode The code I am trying to debug is a simple event handler with some basic sequential code. What I observe is that breakpoints get randomly ignored and VS often exits debug mode when I try to step through the code. The code is compiled in debug mode. I've tried all the usual steps: iisreset, reboot VM, rebuild solution, reinstall Visual Studio. Any ideas?

    Read the article

  • Python "Every Other Element" Idiom

    - by Matt Luongo
    Hey guys, I feel like I spend a lot of time writing code in Python, but not enough time creating Pythonic code. Recently I ran into a funny little problem that I thought might have an easy, idiomatic solution. Paraphrasing the original, I needed to collect every sequential pair in a list. For example, given the list [1,2,3,4,5,6], I wanted to compute [(1,2),(3,4),(5,6)]. I came up with a quick solution at the time that looked like translated Java. Revisiting the question, the best I could do was l = [1,2,3,4,5,6] [(l[2*x],l[2*x+1]) for x in range(len(l)/2)] which has the side effect of tossing out the last number in the case that the length isn't even. Is there a more idiomatic approach that I'm missing, or is this the best I'm going to get?

    Read the article

  • Hadoop: Iterative MapReduce Performance

    - by S.N
    Is it correct to say that the parallel computation with iterative MapReduce can be justified only when the training data size is too large for the non-parallel computation for the same logic? I am aware that the there is overhead for starting MapReduce jobs. This can be critical for overall execution time when a large number of iterations is required. I can imagine that the sequential computation is faster than the parallel computation with iterative MapReduce as long as the memory allows to hold a data set in many cases. Is it the only benefit to use the iterative MapReduce? If not, what are the other benefits could be?

    Read the article

  • Are stack based arrays possible in C#?

    - by Bob
    Let's say, hypothetically (read: I don't think I actually need this, but I am curious as the idea popped into my head), one wanted an array of memory set aside locally on the stack, not on the heap. For instance, something like this: private void someFunction() { int[20] stackArray; //C style; I know the size and it's set in stone } I'm guessing the answer is no. All I've been able to find is heap based arrays. If someone were to need this, would there be any workarounds? Is there any way to set aside a certain amount of sequential memory in a "value type" way? Or are structs with named parameters the only way (like the way the Matrix struct in XNA has 16 named parameters (M11-M44))?

    Read the article

  • openmp in mex : stackoverflow error

    - by Edwin
    i have got the following fraction of code that getting me the stack overflow error #pragma omp parallel shared(Mo1, Mo2, sum_normalized_p_gn, Data, Mean_Out,Covar_Out,Prior_Out, det) private(i) num_threads( number_threads ) { //every thread has a new copy double* normalized_p_gn = (double*)malloc(NMIX*sizeof(double)); #pragma omp critical { int id = omp_get_thread_num(); int threads = omp_get_num_threads(); mexEvalString("drawnow"); } #pragma omp for //some parallel process..... } the variables declared in the shared are created by malloc. and they consumes with large amount of memory there are 2 questions regarding to the above code. 1) why this would generate the stack overflow error( i.e. segmentation fault) before it goes into the parallel for loop? it works fine when it runs in the sequential mode.... 2) am i right to dynamic allocate memory for each thread like "normalized_p_gn" above? Regards Edwin

    Read the article

  • Laws of Computer Science and Programming

    - by Jonas
    We have Amdahl's law that basically states that if your program is 10% sequential you can get a maximum 10x performance boost by parallelizing your application. Another one is Wadler's law which states that In any language design, the total time spent discussing a feature in this list is proportional to two raised to the power of its position. 0. Semantics 1. Syntax 2. Lexical syntax 3. Lexical syntax of comments My question is this: What are the most important (or at least significant / funny but true / sad but true) laws of Computer Science and programming? I want named laws, and not random theorems, So an answer should look something like Surname's (law|theorem|conjecture|corollary...) Please state the law in your answer, and not only a link. Edit: The name of the law does not need to contain it's inventors surname. But I do want to know who stated (and perhaps proved) the law

    Read the article

  • WCF Transaction Scope SQL insert table lock

    - by lihnid
    Hi All, I have two services talking to two different Data-stores (i.e SQL). I am using transactionscope: eg: using(TransactionScope scope = new TransactionScope()) { service1.InsertUser(user);//Insert to SQL Service 1 table User service2.SavePayment(payment);//Save payment SQL Service 2 table payment scope.Complete(); } Service1 is locking the table (User) until the transaction is completed making subsequent transactions with that table sequential. Is there a way to overcome the lock, so can have more than one concurrent calls to the SQL service1 table while the above code is executing? I would appreciate any input. Thanks in Advance. Lihnid

    Read the article

  • Document: How do I include a random fragment from another document

    - by NXT
    I have a word document that I am converting to docbook. This word document has short sections that are quoted from another document. I thinking that I should use a blockquote or sidebar to contain the quoted document. But how do I go about making random headings in non sequential order? For example I might be quoting a section of a document that looks like this: -------------------------------------- B. Damage Assessment blah...paragraphs B.1 Responsibilities. some stuff... B.2 Authority some more stuff... -------------------------------------- Those headings are totally unrelated to the flow of the containing document. Is this an appropriate place to use the bridgehead element?

    Read the article

< Previous Page | 6 7 8 9 10 11 12 13 14 15 16 17  | Next Page >