Search Results

Search found 577 results on 24 pages for 'andi sf'.


  • Using EhCache for session.createCriteria(...).list()

    - by James Smith
I'm benchmarking the performance gains from using a 2nd level cache in Hibernate (enabling EhCache), but it doesn't seem to improve performance. In fact, the time to perform the query slightly increases. The query is:

        session.createCriteria(MyEntity.class).list();

    The entity is:

        @Entity
        @Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE)
        public class MyEntity {
            @Id @GeneratedValue
            private long id;

            @Column(length=5000)
            private String data;

            //---SNIP getters and setters---
        }

    My hibernate.cfg.xml is:

        <!-- all the normal stuff to get it to connect & map the entities, plus: -->
        <property name="hibernate.cache.region.factory_class">
            net.sf.ehcache.hibernate.EhCacheRegionFactory
        </property>

    The MyEntity table contains about 2000 rows. The problem is that before adding in the cache, the query above to list all entities took an average of 65 ms. After the cache, it takes an average of 74 ms. Is there something I'm missing? Is there something extra that needs to be done to improve performance?
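    A likely explanation (not from the original question, but well-documented Hibernate behaviour): the second-level cache stores entities by identifier, while a Criteria list() is a query, and query results are only cached if the separate query cache is switched on and the query is marked cacheable. A minimal sketch of that, assuming the EhCache setup above:

        <!-- hibernate.cfg.xml: enable the query cache alongside the 2nd level cache -->
        <property name="hibernate.cache.use_second_level_cache">true</property>
        <property name="hibernate.cache.use_query_cache">true</property>

        // Java: mark the query as cacheable so its result set goes to the query cache
        List<MyEntity> entities = session.createCriteria(MyEntity.class)
                                         .setCacheable(true)
                                         .list();

    Without this, each list() still runs the SQL, and the cache bookkeeping can even add the small overhead observed above.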

    Read the article

  • x86 Assembly: Before Making a System Call on Linux Should You Save All Registers?

    - by mudge
I have the below code that opens up a file, reads it into a buffer and then closes the file. The close file system call requires that the file descriptor number be in the ebx register. The ebx register gets the file descriptor number before the read system call is made. My question is: should I save the ebx register on the stack or somewhere else before I make the read system call (could int 80h trash the ebx register?), and then restore the ebx register for the close system call? Or is the code I have below fine and safe? I have run the code and it works; I'm just not sure whether it is considered good assembly practice, because I don't save the ebx register before the int 80h read call.

        ;; open up the input file
        mov eax,5               ; open file system call number
        mov ebx,[esp+8]         ; null terminated string file name, first command line parameter
        mov ecx,0o              ; access type: O_RDONLY
        int 80h                 ; file handle or negative error number put in eax
        test eax,eax
        js Error                ; test sign flag (SF) for negative number which signals error

        ;; read in the full input file
        mov ebx,eax             ; assign input file descriptor
        mov eax,3               ; read system call number
        mov ecx,InputBuff       ; buffer to read into
        mov edx,INPUT_BUFF_LEN  ; total bytes to read
        int 80h
        test eax,eax
        js Error                ; if eax is negative then error
        jz Error                ; if no bytes were read then error
        add eax,InputBuff       ; add size of input to the beginning of InputBuff location
        mov [InputEnd],eax      ; assign address of end of input

        ;; close the input file
        ;; file descriptor is already in ebx
        mov eax,6               ; close file system call number
        int 80h
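    For what it's worth, the 32-bit Linux syscall convention preserves all registers across int 80h except eax, which receives the return value, so ebx survives the read call in the code above. If you want to be defensive anyway (say, where a convention isn't guaranteed), the usual pattern is a push/pop around the call - a sketch, not a required fix:

        push ebx                ; save the file descriptor across the syscall
        mov eax,3               ; read system call number
        mov ecx,InputBuff
        mov edx,INPUT_BUFF_LEN
        int 80h
        pop ebx                 ; restore the file descriptor for close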

    Read the article

  • NAnt and relative path execution

    - by stacker
How can I assign to the trunk.dir property a relative path to the trunk location? This is my nant.build file:

        <?xml version="1.0" encoding="utf-8"?>
        <project name="ProjectName" default="build" xmlns="http://nant.sf.net/release/0.85/nant.xsd">

          <!-- Directories -->
          <property name="trunk.dir" value="C:\Projects\ProjectName" /><!-- I want a relative path over here! -->
          <property name="source.dir" value="${trunk.dir}src\" />

          <!-- Working Files -->
          <property name="msbuild.exe" value="C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\msbuild.exe" />
          <property name="solution.sln" value="${source.dir}ProjectName.sln" />

          <!-- Called Externally -->
          <target name="compile">
            <!-- Rebuild forces msbuild to clean and build -->
            <exec program="${msbuild.exe}" commandline="${solution.sln} /t:Rebuild /v:q" />
          </target>
        </project>
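    One common approach (a sketch, assuming the build file lives in the trunk directory itself): NAnt exposes the directory of the running build file through built-in functions, so trunk.dir can be derived instead of hard-coded:

        <!-- project::get-base-directory() returns the directory containing the current build file -->
        <property name="trunk.dir" value="${project::get-base-directory()}" />
        <property name="source.dir" value="${path::combine(trunk.dir, 'src')}" />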

    Read the article

  • Create cross platform Java SWT Application

    - by mchr
I have written a Java GUI using SWT. I package the application using an Ant script (fragment below).

        <jar destfile="./build/jars/swtgui.jar" filesetmanifest="mergewithoutmain">
          <manifest>
            <attribute name="Main-Class" value="org.swtgui.MainGui" />
            <attribute name="Class-Path" value="." />
          </manifest>
          <fileset dir="./build/classes" includes="**/*.class" />
          <zipfileset excludes="META-INF/*.SF" src="lib/org.eclipse.swt.win32.win32.x86_3.5.2.v3557f.jar" />
        </jar>

    This produces a single jar which, on Windows, I can just double-click to run my GUI. The downside is that I have had to explicitly package the Windows SWT jar into my jar. I would like to be able to run my application on other platforms (primarily Linux and OS X). The simplest way would be to create platform-specific jars which package the appropriate SWT files separately. Is there a better way to do this? Is it possible to create a single jar which would run on multiple platforms?
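    One workaround (a sketch of my own, not from the post): keep the application jar platform-neutral, ship the per-platform SWT jars alongside it, and pick one at launch time from os.name. Everything here - the SwtLauncher class and the lib/ file names - is hypothetical:

        // SwtLauncher.java - hypothetical launcher that picks an SWT jar at runtime
        import java.io.File;

        public class SwtLauncher {
            public static void main(String[] args) throws Exception {
                String os = System.getProperty("os.name").toLowerCase();
                String swtJar;
                if (os.contains("win"))      swtJar = "lib/swt-win32.jar";
                else if (os.contains("mac")) swtJar = "lib/swt-macosx.jar";
                else                         swtJar = "lib/swt-linux-gtk.jar";

                // Run the real main class in a child JVM with the chosen jar on the classpath
                ProcessBuilder pb = new ProcessBuilder(
                        "java", "-cp", "swtgui.jar" + File.pathSeparator + swtJar,
                        "org.swtgui.MainGui");
                pb.start().waitFor();
            }
        }

    On OS X the child JVM would also need -XstartOnFirstThread for SWT; that flag is the one platform-specific wrinkle this sketch glosses over.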

    Read the article

  • Good Hosting Providers With Zend Framework Support

    - by manyxcxi
I currently use ixwebhosting for my hosting services. They're cheap and work (most of the time). The databases are horribly slow, the servers are horribly slow, and their support (though usually prompt) is tough to deal with. That being said, they're cheap, I've got like 20 domains hosted in my account, none of them are high volume, and they work JUST well enough - until today. This isn't meant to be a condemnation of ixwh, though. Their prices are very low for what they do offer, and most things work just fine, most of the time. I need to be able to host web apps written with Zend Framework in a fairly easy fashion. The server performance can't be worse than what I've already had (a pretty low hurdle to clear), and I don't want to spend $30/mo. These are not money-making websites - they're projects. My requirements are PHP 5.3, ZF support, MySQL databases, multiple domains - not much. Who should I look at, and who should I look out for? Also: I put this on SO instead of SF because of the Zend Framework-specific requirement. If I'm wrong, do as you wish.

    Read the article

  • Four New Java Champions

    - by Tori Wieldt
Four luminaries in the Java community have been selected as new Java Champions. They are Agnes Crepet, Lars Vogel, Yara Senger and Martijn Verburg. They were selected for their technical knowledge, leadership, inspiration, and tireless work for the community. Here is how they rock the Java world:

    Agnes Crepet
    Agnes Crepet (France) is a passionate technologist with over 11 years of software engineering experience, especially in Java technologies, as a developer, architect, consultant and trainer. She has been using Java since 1999, implementing many kinds of applications (from 20 to 10,000 man-days) for different business fields (banking, retail, and pharmacy). Currently she is a Java EE architect for a French pharmaceutical company, the world leader in homeopathy. She is also the co-founder, with other passionate Java developers, of a software company named Ninja Squad, dedicated to Software Craftsmanship. Agnes leads two Java User Groups (JUGs), the Lyon JUG and Duchess France, and is the founder of the Mix-IT Conference and the Cast-IT Podcast, two projects about Java and Agile development. She speaks at Java and JUG conferences around the world and regularly writes articles about the Java ecosystem for the French print developer magazine Programmez! and for the Duchess Blog. Follow Agnes @agnes_crepet.

    Lars Vogel
    Lars Vogel (Germany) is the founder and CEO of vogella GmbH and works as a Java, Eclipse and Android consultant, trainer and book author. He is a regular speaker at international conferences such as EclipseCon, Devoxx, Droidcon and O'Reilly's Android Open. With more than one million visitors per month, his website vogella.com is one of the central sources for Java, Eclipse and Android programming information. Lars is a committer on the Eclipse project and received the "Eclipse Top Contributor Award" in 2010 and the "Eclipse Top Newcomer Evangelist Award" in 2012. Follow Lars on Twitter @vogella.

    Yara Senger
    Yara Senger (Brazil) has been a tireless Java activist in Brazil for many years. She is President of SouJava and an alternate representative of the group on the JCP Executive Committee. Yara has led SouJava in many initiatives, from technical events to social activities. She is co-founder and director of GlobalCode, which trains developers throughout Brazil. Last year, she was the recipient of a Duke's Choice Award for the JHome embedded environment. Yara is also an active speaker, giving presentations in many countries, including at JavaOne SF, JavaOne Latin America, JavaOne India, JFokus, and JUGs throughout Brazil. Yara is editor of InfoQ Brasil and also posts frequently at http://blog.globalcode.com.br/search/label/Yara. Follow Yara @YaraSenger.

    Martijn Verburg
    Martijn Verburg (UK) is the CTO of jClarity (a Java/JVM performance cloud tooling start-up) and has over 12 years' experience as a Java/JVM technology professional and OSS mentor in a variety of organisations, from start-ups to large enterprises. He is the co-leader of the London Java Community (~2800 developers) and leads the global effort for the Java User Group "Adopt a JSR" and "Adopt OpenJDK" programmes. These programmes encourage day-to-day Java developer involvement with OpenJDK and Java standards (JSRs), an important relationship for keeping the Java ecosystem relevant to the 9 million Java developers out there today.
As a leading expert on technical team optimisation, he is in high demand as a speaker at major conferences (JavaOne, Devoxx, OSCON, QCon), where you'll often find him challenging the industry status quo via his alter ego "The Diabolical Developer." You can read more in the OTN article "Challenging the Diabolical Developer: A Conversation with JavaOne Rock Star Martijn Verburg." Follow Martijn @karianna. The Java Champions are an exclusive group of passionate Java technology and community leaders who are community-nominated and selected under a project sponsored by Oracle. Java Champions get the opportunity to provide feedback, ideas, and direction that will help Oracle grow the Java Platform. Congratulations to these new Java Champions!

    Read the article

  • EnOcean -> USB Serial Communication (C++)

    - by regorianer
I guess this is not quite the right place to ask for EnOcean-specific details, but maybe I am doing something wrong with the serial connection itself, so you may be able to help whether or not you know the technology. I have a problem communicating with the RCM152 module. I have written a C++ program that talks to the RCM152 by emulating packets of the PTM 200. I teach the RCM152 to listen to the following packet (one log line per byte, sent at 04:21:44.546 on 06/19/12):

        55          <-- start byte
        00 07 07 01 <-- header (begin to end)
        7a          <-- CRC check (header)
        f6          <-- packet type
        20          <-- my action (00 and 10 -> OFF, 20 and 30 -> ON)
        00 24 21 87 <-- serial bytes 3, 2, 1, 0
        30          <-- status
        03          <-- 03 for sender, 01 for receiver
        ff ff ff ff <-- destination (begin to end)
        ff          <-- transmission quality (sender ff)
        00
        10          <-- CRC check (data)

    A PTM 200 device or an SG-FUS-24-230 device sends equivalent packets, e.g. (received at 04:30:31.106 on 06/19/12):

        55 00 07 07 01 7a f6 40 00 24 6c 2f 30 01 ff ff ff ff 37 00 d1

    I can control the device connected to the RCM152 as intended with my sent packets (that is a good sign: it means the RCM152 has learned my packets and can use them; the actions (0x10 - ON, 0x30 - OFF) also work fine). The problem is that no matter which serial I choose, the RCM152 reacts to these packets. I only want actions to occur when the taught-in serial is sent; all packets with different serials should be ignored. The RCM152 does not react to the packets sent by the PTM 200 or the SG-FUS-24-230, because those were not taught in. That is exactly the behaviour I want for the packets I create myself. What am I doing wrong?
The C++ library I am using is rllib: http://pvbrowser.de/pvbrowser/sf/manual/rllib/html/ The EnOcean EEP specification says: "For the purpose of a determined relationship between transmitter and receiver, each transmitting device has a unique Sender-ID which is part of each radio telegram. The receiving device detects from the Sender-ID whether the device is known, i.e., was already learned, or unknown. A telegram with an unknown Sender-ID is disregarded."
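    One detail worth double-checking (an assumption of mine, not from the thread): in ESP3 framing the final byte is a CRC8 over the data payload, so whenever the serial bytes change, that data CRC must be recomputed or the frame becomes inconsistent. A minimal sketch of the CRC8 commonly used by EnOcean framing (polynomial 0x07, initial value 0 - verify against your module's documentation):

        #include <cstdint>
        #include <cstddef>

        // CRC8, polynomial 0x07, init 0 - assumed to match ESP3 framing
        uint8_t crc8(const uint8_t* data, size_t len) {
            uint8_t crc = 0;
            for (size_t i = 0; i < len; ++i) {
                crc ^= data[i];
                for (int bit = 0; bit < 8; ++bit)
                    crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07)
                                       : (uint8_t)(crc << 1);
            }
            return crc;
        }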

    Read the article

  • OpenGL textures trigger error 1281 if SFML is not called

    - by user3714670
I am using SOIL to apply textures to VBOs. Without textures I could change the background and display black (default color) VBOs easily, but now with textures OpenGL is giving error 1281 (GL_INVALID_VALUE), the background is black and some textures are not applied - though the first texture IS applied (nothing else works). The strange thing is: if I create a dummy texture with SFML in the same program, all other textures do work. So I guess there is something I forgot in the texture creation/application; if someone could enlighten me. Here is the code I use to load textures (once loaded, a texture is kept in memory; it mostly comes from the SOIL example):

        texture = SOIL_load_OGL_single_cubemap(
            filename,
            SOIL_DDS_CUBEMAP_FACE_ORDER,
            SOIL_LOAD_AUTO,
            SOIL_CREATE_NEW_ID,
            SOIL_FLAG_POWER_OF_TWO | SOIL_FLAG_MIPMAPS | SOIL_FLAG_DDS_LOAD_DIRECT
        );
        if( texture > 0 ) {
            glEnable( GL_TEXTURE_CUBE_MAP );
            glEnable( GL_TEXTURE_GEN_S );
            glEnable( GL_TEXTURE_GEN_T );
            glEnable( GL_TEXTURE_GEN_R );
            glTexGeni( GL_S, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP );
            glTexGeni( GL_T, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP );
            glTexGeni( GL_R, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP );
            glBindTexture( GL_TEXTURE_CUBE_MAP, texture );
            std::cout << "the loaded single cube map ID was " << texture << std::endl;
        } else {
            std::cout << "Attempting to load as a HDR texture" << std::endl;
            texture = SOIL_load_OGL_HDR_texture(
                filename, SOIL_HDR_RGBdivA2, 0, SOIL_CREATE_NEW_ID,
                SOIL_FLAG_POWER_OF_TWO | SOIL_FLAG_MIPMAPS
            );
            if( texture < 1 ) {
                std::cout << "Attempting to load as a simple 2D texture" << std::endl;
                texture = SOIL_load_OGL_texture(
                    filename, SOIL_LOAD_AUTO, SOIL_CREATE_NEW_ID,
                    SOIL_FLAG_POWER_OF_TWO | SOIL_FLAG_MIPMAPS | SOIL_FLAG_DDS_LOAD_DIRECT
                );
            }
            if( texture > 0 ) {
                // enable texturing
                glEnable( GL_TEXTURE_2D );
                // bind an OpenGL texture ID
                glBindTexture( GL_TEXTURE_2D, texture );
                std::cout << "the loaded texture ID was " << texture << std::endl;
            } else {
                glDisable( GL_TEXTURE_2D );
                std::cout << "Texture loading failed: '" << SOIL_last_result() << "'" << std::endl;
            }
        }

    and here is how I apply it when drawing:

        GLuint TextureID = glGetUniformLocation(shaderProgram, "myTextureSampler");
        if(!TextureID)
            cout << "TextureID not found ..." << endl;
        // glEnableVertexAttribArray(TextureID);
        glActiveTexture(GL_TEXTURE0);
        if(SFML)
            sf::Texture::bind(sfml_texture);
        else {
            glBindTexture (GL_TEXTURE_2D, texture);
            // glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 1024, 768, 0, GL_RGB, GL_UNSIGNED_BYTE, &texture);
        }
        glUniform1i(TextureID, 0);

    I am not sure that SOIL is suited to my program, as I want something as simple as possible (I used SFML's texture object, which was the best, but I can't any more), but if I can get it to work it would be great.
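    A debugging sketch (my suggestion, not from the post): error 1281 is GL_INVALID_VALUE, and since GL errors are sticky it helps to poll glGetError after each suspect call to locate the first offender. Note also that glGetUniformLocation returns a GLint of -1 on failure (0 is a perfectly valid location), so `if(!TextureID)` tests the wrong condition:

        #include <GL/gl.h>   // platform GL header; adjust for your setup
        #include <cstdio>

        // Call after each suspect GL call to find the first failing one
        void checkGlError(const char* where) {
            for (GLenum err; (err = glGetError()) != GL_NO_ERROR; )
                std::printf("GL error 0x%04x (%u) at %s\n", err, err, where);
        }

        // Correct failure test: -1 means "uniform not found", 0 is valid
        GLint textureLoc = glGetUniformLocation(shaderProgram, "myTextureSampler");
        if (textureLoc == -1)
            std::printf("myTextureSampler not found...\n");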

    Read the article

  • Finalists for Community Manager of the Year Announced

    - by Mike Stiles
For as long as brand social has been around, there's still an amazing disparity from company to company on the role of Community Manager. At some brands, they are the lead social innovators. At others, the task has been relegated to interns who are at the company temporarily. Some have total autonomy and trust. Others must get chain-of-command permission each time they engage. So what does a premiere "worth their weight in gold" Community Manager look like? More than anyone else in the building, they have the most intimate knowledge of who the customer is. They live on the front lines and are the first to detect problems and opportunities. They are sincere, raving fans of the brand themselves and are trusted advocates for the others. They're fun to be around. They aren't salespeople. Give me one Community Manager who's been at the job 6 months over 5 focus groups any day. Because, not unlike in speed dating, they must immediately learn how to make a positive, lasting impression on fans so they'll want to return and keep the relationship going. They're informers and entertainers, with a true belief in the value of the brand's proposition. Internally, they live at the mercy of the resources allocated toward social. Many, whose managers don't understand the time involved in properly curating a community, are tasked with 2 or 3 too many of them. 63% of CMs will spend over 30 hours a week on one community. They come to intuitively know the value of the relationships they're building, even if it can't always be shown in a bar graph to the C-suite. Many must communicate how the customer feels to executives that simply don't seem to want to hear it. Some can get the answers fans want quickly; others are frustrated in their ability to respond within an impressive timeframe. In short, in a corporate world coping with sweeping technological changes, amidst business school doublespeak, pie charts, decks, strat sessions and data points, the role of the Community Manager is the most... human. They are the true emotional connection to the real-life customer. Which is why we sought to find a way to recognize and honor who they are, what they do, and how well they have defined the position as social grows and integrates into the larger organization. Meet our 3 finalists for Community Manager of the Year.

    Jeff Esposito with Vistaprint
    Jeff manages and heads up content strategy for all social networks and blogs. He also crafts company-wide policies surrounding the social space. Vistaprint won the NEDMA Gold Award for Twitter Strategy in 2010 and 2011, and a Bronze in 2011 for Social Media Strategy. Prior to Vistaprint, Jeff was Media Relations Manager with the Long Island Ducks. He graduated from Seton Hall University with a BA in English and a minor in Classical Studies.

    Stacey Acevero with Vocus
    In addition to social management, Stacey blogs at Vocus on influential marketing and social media, and blogs at PRWeb on public relations and SEO. She's been named one of the #Nifty50 Women in Tech on Twitter 2 years in a row, as well as included in the 15 up-and-coming PR pros to watch in 2012.

    Carly Severn with the San Francisco Ballet
    Carly drives engagement, widens the fanbase and generates digital content for America's oldest professional ballet company. Managed properties include Facebook, Twitter, Tumblr, Pinterest, Instagram, YouTube and G+. Prior to joining the SF Ballet, Carly was Marketing & Press Coordinator at The Fitzwilliam Museum at Cambridge, where she graduated with a degree in English.
We invite you to join us at the first annual Oracle Social Media Summit November 14 and 15 at the Wynn in Las Vegas where our finalists will be featured. Over 300 top brand marketers, agency executives, and social leaders & innovators will be exploring how social is transforming business. Space is limited and the information valuable, so get more info and get registered as soon as possible at the event site.

    Read the article

  • CLSF & CLK 2013 Trip Report by Jeff Liu

    - by jamesmorris
This is a contributed post from Jeff Liu, lead XFS developer for the Oracle mainline Linux kernel team. Recently, I attended both the China Linux Storage and Filesystem workshop (CLSF) and the China Linux Kernel conference (CLK), which were held in Shanghai. Here are the highlights of both events.

    CLSF - 17th October

    XFS update (led by Jeff Liu)
    XFS keeps making rapid progress with a lot of changes, especially focused on infrastructure/performance improvements as well as new feature development. This is reflected in a sample statistic comparing XFS, Ext4+JBD2 and Btrfs:

        # git diff --stat --minimal -C -M v3.7..v3.12-rc4 -- fs/xfs
        (and likewise for fs/ext4 fs/jbd2, and fs/btrfs)

        XFS:       141 files changed, 27598 insertions(+), 19113 deletions(-)
        Ext4+JBD2:  39 files changed, 10487 insertions(+),  5454 deletions(-)
        Btrfs:      70 files changed, 19875 insertions(+),  8130 deletions(-)

    What made up those changes in XFS?
    - Self-describing metadata (CRC32c). This is a new feature and contributed about 70% of the code changes; it can be enabled via `mkfs.xfs -m crc=1 /dev/xxx` for the v5 superblock.
    - Transaction log space reservation improvements. With this change, we can calculate the log space reservation at mount time rather than at runtime, reducing CPU overhead.
    - User namespace support, so both XFS and USERNS can be enabled in the kernel configuration beginning with Linux 3.10. Thanks to Dwight Engen for his efforts on this.
    - Split project/group quota inodes. Originally, project quota could not be enabled together with group quota because they shared the same quota file inode; now it works, but only for the v5 superblock (i.e., with CRC enabled).
    - CONFIG_XFS_WARN, a new lightweight runtime debugger which can be deployed in production environments.
    - Readahead of log objects during recovery, which speeds up log replay significantly.
    - Speculative preallocation inode tracking, clearing and throttling. The main purpose is to deal with inodes carrying post-EOF space due to speculative preallocation, and to support improved quota management that frees up a significant amount of unwritten space when at or near EDQUOT. It supports background scanning, which occurs on a longish interval (5 minutes by default, tunable), and on-demand scanning/trimming via ioctl(2).

    Bitter arguments ensued in this session, especially around comparisons between Ext4 and Btrfs in different areas; I had to spend a whole morning of the first day answering those questions. We basically agreed that XFS is the best choice on Linux nowadays because:
    - Stability: XFS has a good stability record over the past 10 years. Fengguang Wu, who leads the 0-day kernel test project, also said that he has observed fewer errors in XFS than in other filesystems over the past year or more. I owe that to the XFS upstream code reviewers, who always perform serious code review as well as testing.
    - Good performance for large and small files. "XFS does not work very well for small files" has been an old story for years now.
    - Best choice (maybe) for distributed PB-scale filesystems; e.g., Ceph recommends deploying the OSD daemon on XFS because Ext4 has a limited xattr size.
    - Best choice for large storage (>16TB). Ext4 does not support a single file larger than around 15.95TB.
    - Scalability: any objection that XFS is best on this point? :)
    - XFS deals with transaction concurrency better than Ext4. Why? The maximum size of the log in XFS is 2038MB, compared to 128MB in Ext4.
    - Misc: Ext4 is widely used and has proved fast/stable under various loads and scenarios; XFS just needs more customers; and Btrfs is still on the road to maturity.
Ceph Introduction (led by Li Wang)
    This was a hot topic. Li gave us a nice introduction to the design as well as their current work. Actually, the Ceph client has been included in the Linux kernel since 2.6.34 and supported by OpenStack since Folsom, but it seems it has not yet been widely deployed in production environments. Their major work focuses on inline data support to separate the metadata and data storage and reduce file access time: a file access needs two round trips - fetch the metadata from the MDS, then get the data from the OSD - and small-file access is limited by network latency. The solution is, for small files, to store the data with the metadata, so that when accessing a small file the metadata server can push both metadata and data to the client at the same time. This way, they reduce the overhead of calculating the data offset and save the communication with the OSD. For this feature they have only run some small-scale testing, but saw really noticeable improvements. Test environment: Intel 2 CPU 12 Core, 64GB RAM, Ubuntu 12.04, Ceph 0.56.6 with 200GB SATA disk, 15 OSD, 1 MDS, 1 MON. Sequential read performance for 1K files improved about 50%. I asked Li and Zheng Yan (the core developer of Ceph, who also worked on Btrfs) whether Ceph is really stable enough to be deployed in production for large-scale PB-level storage, but they could not give a positive answer; it looks like Ceph is not even rolled out across all of Dreamhost (subject to confirmation). According to Li, they have only deployed Ceph for small-scale storage (32 nodes), although they would like to try 6000 nodes in the future.

    Improving Linux swap for flash storage (led by Shaohua Li)
    Because of its high density, low power and low price, flash storage (SSD) is a good candidate to partially replace DRAM. A quick answer for this is using SSD as swap. But Linux swap was designed for slow hard disk storage, so there are a lot of challenges to using SSD efficiently for swap.

    SWAPOUT:
    - swap_map scan: swap_map is the in-memory data structure that tracks swap disk usage, but it is a slow linear scan. It becomes a bottleneck when finding many adjacent pages for SSD use. Shaohua Li changed it to a cluster (128K) list, resulting in an O(1) algorithm. However, this approach needs restrictive cluster alignment and is only enabled for SSD.
    - IO pattern: in most cases, swap IO comes in an interleaved pattern because of multiple reclaimers, or because a free cluster is shared by all reclaimers. Even though the block layer can merge interleaved IO to some extent, we cannot count on it completely. Hence a per-CPU cluster was added on top of the previous change; it helps each reclaimer do sequential IO, which the block layer can merge more easily.
    - TLB flush: if we are reclaiming one active page, we should first move the page from the active LRU list to the inactive LRU list, and then reclaim the page from the inactive LRU to swap it out. During this process we need to clear the PTE twice: first the 'A' (ACCESSED) bit, then the 'P' (PRESENT) bit. Processors need to send lots of IPIs, which makes the TLB flush really expensive. Some work has been done to improve this, including reworking smp_call_function_many() or removing the first TLB flush on x86, but there are still some arguments here and only part of the work has been pushed to mainline.

    SWAPIN:
    - A page fault does iodepth=1 synchronous IO, which is a bit wasteful if it only issues a single page-sized IO. The obvious solution is swap readahead.
But the current in-kernel swap readahead is arbitrary (always 8 pages), and it doesn't perform well for either random or sequential access workloads. Shaohua introduced a new flag for madvise(MADV_WILLNEED) to do swap prefetch, so the change lives in the userspace API and leaves the in-kernel readahead unchanged (though I think some improvement could also be done there; see the madvise sketch at the end of this section).

    SWAP discard:
    As we know, discard is important for SSD write throughput, but the current swap discard implementation is synchronous. He changed it to asynchronous discard, which allows discard and write to run at the same time. Meanwhile, the unit of discard was also optimized to the cluster.

    Misc - lock contention:
    With many concurrent swapouts and swapins, contention on locks such as anon_vma or swap_lock is high, so he changed swap_lock to a per-swap-device lock. But some lock contention remains on very fast SSDs because of the swapcache address_space lock.

    Zproject (led by Bob Liu)
    Bob gave us a very nice introduction to the current state of memory compression. There are now three projects (zswap/zram/zcache) which all aim to smooth out swap IO storms and improve performance, but each has its own pros and cons.
    - ZSWAP is implemented on the frontswap API and uses a dynamic allocator named zbud to allocate free pages. Zbud means pairs of zpages are "buddied": it can store at most two compressed pages in one page frame, so the maximum compression ratio is 50%. Each page frame is LRU-linked and can be shrunk under memory pressure. If the compressed memory pool reaches its limit, shrink or reclaim happens: a page frame is decompressed into two newly allocated pages, which are then written to the real swap device, but this can fail when allocating the two pages.
    - ZRAM acts as a compressed ramdisk used as a swap device, and it uses zsmalloc as its allocator, which has high density but may have fragmentation issues. Besides, page reclaim is hard, since it needs more pages to uncompress and free just one page. ZRAM is preferred by embedded systems, which may not have any real swap device. Both ZRAM and ZSWAP are now in the drivers/staging tree, and in the mm community there are discussions about merging ZRAM into ZSWAP, or vice versa, but no agreement yet.
    - ZCACHE handles file page compression, but it was removed from staging recently.

    From industry (led by Tang Jie, LSI)
    An LSI engineer introduced several new products to us. The first was RAID5/6 cards that use full-stripe writes to improve performance. The second was the SandForce flash controller, which can understand data file types (data entropy) to reduce write amplification (WA) for nearly all writes; it is called DuraWrite, and the typical WA is 0.5. What's more, if its Dynamic Logical Capacity function module is enabled, the controller can do data compression transparently to the upper layer. LSI testing shows that with this virtual capacity enabled, a 1x TB drive can support up to 2x TB of capacity, but the application must monitor free flash space to maintain optimal performance and to guard against free-flash-space exhaustion. He said the most useful application is databases. Another thing worth mentioning is the NV-DRAM memory in NMR/Raptor, which is directly exposed to the host system: applications can access the NV-DRAM directly via a memory address, using the standard mmap() system call. He said it is very useful for database logging today. These kinds of NVM products have begun to appear in recent years, and it is said that Samsung is building a research center in China for related products. IMHO, NVM will affect the current OS layers, especially file systems; e.g., journaling may need to be redesigned to fully utilize nonvolatile memory.
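    Picking up the swap-prefetch idea from the SWAPIN discussion above, here is a minimal sketch of the userspace side (my illustration: MADV_WILLNEED is a long-standing madvise(2) flag, and the talk's change concerns how the kernel honours it for swapped-out pages):

        #include <sys/mman.h>

        /* Hint the kernel to read (swap in) this range before we touch it.
         * addr must be page-aligned; returns 0 on success, -1 on error. */
        static int prefetch_range(void *addr, size_t len)
        {
            return madvise(addr, len, MADV_WILLNEED);
        }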
OCFS2 (led by Canquan Shen)
    Without a doubt, Huawei has been the biggest contributor to OCFS2 over the past two years. They have posted 46 upstream patches, of which 39 have been merged. Their current project is based on 32/64-node clusters, but they have also tried 128 nodes at the experimental stage. The major work they are doing is supporting ATS (atomic test and set), which can work together with DLM. This idea looks inspired by VMware's VMFS locking, i.e., http://blogs.vmware.com/vsphere/2012/05/vmfs-locking-uncovered.html

    CLK - 18th October 2013

    Improving Linux Development with Better Tools (Andi Kleen)
    This talk focused on how to find and solve bugs as Linux's complexity grows. Generally, we can do this with the following kinds of tools:
    - Static code checkers, e.g. sparse, smatch, coccinelle, clang checker, checkpatch, gcc -W/LTO, stanse. These can check a lot of things - simple mistakes, complex problems - but the challenges are that some are very slow, they produce false positives, and a concentrated effort may be needed to get the false positives down. In particular, no static checker I found can follow indirect calls ("OO in C", common in the kernel):

        struct foo_ops {
            int (*do_foo)(struct foo *obj);
        };
        foo->do_foo(foo);

    - Dynamic runtime checkers, e.g. thread checkers, kmemcheck, lockdep. Ideally all kernel code would come with a test suite; then someone could run all the dynamic checkers.
    - Fuzzers/test suites. E.g. Trinity is a great tool; it finds many bugs, but needs a manual model for each syscall. Modern fuzzers are built around automatic feedback, but not for the kernel yet: http://taviso.decsystem.org/making_software_dumber.pdf
    - Debuggers/tracers to understand code, e.g. ftrace, which can dump on events/oops/custom triggers, but in many cases there is still too much overhead to run it at all times during debugging.
    - Tools to read/understand source, e.g. grep/cscope, which work great for many cases but do not understand indirect pointers (the OO-in-C model used in the kernel). "Give us all do_foo instances":

        struct foo_ops {
            int (*do_foo)(struct foo *obj);
        } = { .do_foo = my_foo };
        foo->do_foo(foo);

    It would be great to have a cscope-like tool that understands this based on types/initializers.

    XFS: The High Performance Enterprise File System (Jeff Liu) [slides]
    I gave a talk introducing the disk layout and unique features, as well as the recent changes. The slides include some charts comparing the performance of XFS/Btrfs/Ext4 for small files. About a dozen attendees raised their hands when I asked who had experience with XFS. I remember that when I asked the same question at LinuxCon Japan, only 3 people raised their hands - but they were Chris Mason, Ric Wheeler, and another attendee. The audience questions mainly focused on stability and comparisons with other file systems.

    Linux Containers (Feng Gao)
    The speaker introduced the purpose of the various kinds of namespaces - mount/UTS/IPC/network/PID/user - as well as the system API/ABI. For userspace tools, he mainly focused on Libvirt LXC rather than LXC itself.
Libvirt LXC is another userspace container management tool, implemented as one type of libvirt driver; it can manage containers, create namespaces, create a private filesystem layout for a container, create devices for a container, and set up resource controllers via cgroups. In this talk, Feng also mentioned two other possible new namespaces for the future. The first is audit, though it is not yet clear whether it should be assigned to the user namespace. The other is syslog, but the question is: do we really need it?

    In-memory Compression (Bob Liu)
    Same as at CLSF, a nice introduction which I have already covered above.

    Misc
    There were some other talks related to ACPI-based memory hotplug, smart wake-affinity in the scheduler, etc., but my head is not big enough to record all those things.

    -- Jeff Liu

    Read the article

  • could not bind socket while haproxy restart

    - by shreyas
I'm restarting HAProxy with the following command:

        haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)

    but I get the following messages:

        [ALERT] 183/225022 (9278) : Starting proxy appli1-rewrite: cannot bind socket
        [ALERT] 183/225022 (9278) : Starting proxy appli2-insert: cannot bind socket
        [ALERT] 183/225022 (9278) : Starting proxy appli3-relais: cannot bind socket
        [ALERT] 183/225022 (9278) : Starting proxy appli4-backup: cannot bind socket
        [ALERT] 183/225022 (9278) : Starting proxy ssl-relay: cannot bind socket
        [ALERT] 183/225022 (9278) : Starting proxy appli5-backup: cannot bind socket

    My haproxy.cfg file looks like the following:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            #log loghost local0 info
            maxconn 4096
            #chroot /usr/share/haproxy
            user haproxy
            group haproxy
            daemon
            #debug
            #quiet

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            option redispatch
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        listen appli1-rewrite 0.0.0.0:10001
            cookie SERVERID rewrite
            balance roundrobin
            server app1_1 192.168.34.23:8080 cookie app1inst1 check inter 2000 rise 2 fall 5
            server app1_2 192.168.34.32:8080 cookie app1inst2 check inter 2000 rise 2 fall 5
            server app1_3 192.168.34.27:8080 cookie app1inst3 check inter 2000 rise 2 fall 5
            server app1_4 192.168.34.42:8080 cookie app1inst4 check inter 2000 rise 2 fall 5

        listen appli2-insert 0.0.0.0:10002
            option httpchk
            balance roundrobin
            cookie SERVERID insert indirect nocache
            server inst1 192.168.114.56:80 cookie server01 check inter 2000 fall 3
            server inst2 192.168.114.56:81 cookie server02 check inter 2000 fall 3
            capture cookie vgnvisitor= len 32
            option httpclose            # disable keep-alive
            rspidel ^Set-cookie:\ IP=   # do not let this cookie tell our internal IP address

        listen appli3-relais 0.0.0.0:10003
            dispatch 192.168.135.17:80

        listen appli4-backup 0.0.0.0:10004
            option httpchk /index.html
            option persist
            balance roundrobin
            server inst1 192.168.114.56:80 check inter 2000 fall 3
            server inst2 192.168.114.56:81 check inter 2000 fall 3 backup

        listen ssl-relay 0.0.0.0:8443
            option ssl-hello-chk
            balance source
            server inst1 192.168.110.56:443 check inter 2000 fall 3
            server inst2 192.168.110.57:443 check inter 2000 fall 3
            server back1 192.168.120.58:443 backup

        listen appli5-backup 0.0.0.0:10005
            option httpchk *
            balance roundrobin
            cookie SERVERID insert indirect nocache
            server inst1 192.168.114.56:80 cookie server01 check inter 2000 fall 3
            server inst2 192.168.114.56:81 cookie server02 check inter 2000 fall 3
            server inst3 192.168.114.57:80 backup check inter 2000 fall 3
            capture cookie ASPSESSION len 32
            srvtimeout 20000
            option httpclose            # disable keep-alive
            option checkcache           # block response if set-cookie & cacheable
            rspidel ^Set-cookie:\ IP=   # do not let this cookie tell our internal IP address
            #errorloc 502 http://192.168.114.58/error502.html
            #errorfile 503 /etc/haproxy/errors/503.http
            errorfile 400 /etc/haproxy/errors/400.http
            errorfile 403 /etc/haproxy/errors/403.http
            errorfile 408 /etc/haproxy/errors/408.http
            errorfile 500 /etc/haproxy/errors/500.http
            errorfile 502 /etc/haproxy/errors/502.http
            errorfile 503 /etc/haproxy/errors/503.http
            errorfile 504 /etc/haproxy/errors/504.http

    What is wrong with my approach?
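    A first diagnostic step (my suggestion, not from the post): "cannot bind socket" at reload time usually means another process is still holding the listen ports - often because the pid file is stale, so -sf signalled nothing. Something along these lines would show who owns a failing port (standard Linux tooling):

        # Does the pid file actually point at the running haproxy?
        cat /var/run/haproxy.pid
        ps -p $(cat /var/run/haproxy.pid)

        # Who is currently listening on one of the failing ports?
        netstat -tlnp | grep :10001
        # or
        lsof -i :10001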

    Read the article

  • How to have Windows Server DNS use hosts file to resolve specific host names

    - by user41079
Hello, everyone. I'm facing a small problem with the Windows Server 2003 DNS service. In my corporation, I'm running a Microsoft DNS server (172.16.0.12) to do name resolution for my company intranet (domain names end in dev.nls., resolving to IPs 172.16..), and it is also configured as a DNS forwarder to forward other domain names (e.g. *.google.com, *.sf.net) to real Internet DNS servers. This internal DNS server is never intended to serve users from the outside world. We are also running a mail server (serving incoming mail for a real Internet domain, @nlscan.com) inside the company firewall, which can be reached either way: by connecting to 172.16.0.10 from within the intranet, or by connecting to mail.nlscan.com (resolving to 202.101.116.9) from the Internet. Note that 172.16.0.10 and 202.101.116.9 are not the same physical machine: the 202 one is a firewall machine that port-forwards ports 25 and 110 to the intranet address 172.16.0.10.

    Now my question: if users inside the corporate LAN resolve mail.nlscan.com, it resolves to 202.101.116.9. That is correct and workable, BUT NOT GOOD, because the mail traffic goes to the firewall machine and then bounces to 172.16.0.10. I would like our internal DNS server to intercept the name mail.nlscan.com and resolve it to 172.16.0.10. So I hoped I could write an entry in a "hosts" file on 172.16.0.12 to do this; but how can the Microsoft DNS server be made to recognize such a "hosts" file?

    Perhaps you would suggest having intranet users use 172.16.0.10 to access the mail server directly? I have to say that is inconvenient. Suppose a user (employee) works on his laptop, daytime in the office and nighttime at home; when he is at home, he cannot use 172.16.0.10. Creating a zone for nlscan.com on our internal DNS server is also not feasible, because the authoritative name server for the nlscan.com domain is at our ISP, and it is responsible for resolving other host names and sub-domains under nlscan.com. Thank you in advance.
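    One standard workaround (my suggestion; the Windows DNS Server service never consults the hosts file): create a zone for just the single name mail.nlscan.com, leaving the rest of nlscan.com to the ISP's name server. Sketched with the Windows Server 2003 dnscmd tool; the record syntax below is from its usual form, so verify it on your box:

        rem Create a primary zone covering only the one host name
        dnscmd 172.16.0.12 /ZoneAdd mail.nlscan.com /Primary /file mail.nlscan.com.dns

        rem Add a blank-named A record so mail.nlscan.com itself resolves to the intranet address
        dnscmd 172.16.0.12 /RecordAdd mail.nlscan.com @ A 172.16.0.10

    Intranet clients then get 172.16.0.10 for mail.nlscan.com, while Internet clients still resolve it via the ISP's name server to 202.101.116.9.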

    Read the article

  • mobile broadband recommendations for Lenovo T500

    - by Justin Grant
I use a Lenovo T500 primarily in and around San Francisco, although I do some travel in the US on business and occasionally to Europe/Asia. I'm looking for a good mobile broadband option for my T500. I am admittedly baffled by the various mobile-broadband choices (3G vs. 4G, WiMax vs. LTE vs. MIMO vs. ..., etc.). My priorities are (in this order):
    1. Compatible with Lenovo T500 and Windows 7. I realize only the AT&T accessory card is listed on Lenovo's site, but I've also heard that other cards will work in my T500 too, like the WiMax/WiFi combo card - so I'm interested in what actually works, not necessarily only what Lenovo is promoting.
    2. Reliable coverage in large US cities, especially the SF Bay Area. My iPhone has lousy coverage in many spots, so I'd be nervous about an AT&T 3G option unless the problem is with the iPhone and not AT&T's network. I'm OK with non-great coverage outside major US cities, since I don't do much travel in those areas.
    3. Speed: faster is better.
    4. Internal card. I'd slightly prefer something I could install inside my T500 instead of a dongle on the side that might break off, although this is my lowest priority, so it's not a big deal.
    5. Price. I don't want to pay over $100/month.
    I've tried lots of Googling and haven't come up with clear answers. I've seen lots of general overviews without recommendations, and lots of passionate opinions which don't feel objective (and don't help me understand compatibility with my hardware & geography). Can you recommend a good, objective guide online - ideally for Lenovo, although a general guide is OK too - which can help me figure out which option is the best one for me? I'd also be interested in your own personal experiences of using mobile broadband with a Lenovo T500. I'll accept the answer which gets me closest to making a decision.

    Read the article

  • Netcat file transfer problem

    - by thepurplepixel
I have two custom scripts I just wrote to facilitate transferring files between my VPS and my home server. They are both written in bash (short & sweet).

    To send:

        #!/bin/bash
        SENDFILE=$1
        PORT=$2
        HOST='<my house>'
        HOSTIP=`host $HOST | grep "has address" | cut --delimiter=" " -f 4`
        echo Transferring file \"$SENDFILE\" to $HOST \($HOSTIP\).
        tar -c "$SENDFILE" | pv -c -N tar -i 0.5 | lzma -z -c -6 | pv -c -N lzma -i 0.5 | nc -q 1 $HOSTIP $PORT
        echo Done.

    To receive:

        #!/bin/bash
        SERVER='<myserver>'
        SERVERIP=`host $SERVER | grep "has address" | cut --delimiter=" " -f 4`
        PORT=$1
        echo Receiving file from $SERVER \($SERVERIP\) on port $PORT.
        nc -l $PORT | pv -c -N netcat -i 0.5 | lzma -d -c | pv -c -N lzma -i 0.5 | tar -xf -
        echo Done.

    The problem is that, for a very quick second, I see something flash along the lines of "Connection refused" (before pv overwrites it), and no file is ever transferred. The port is forwarded through my router, and nmap confirms it:

        ~$ sudo nmap -sU -PN -p55515 -v <my house>

        Starting Nmap 5.00 ( http://nmap.org ) at 2010-04-21 18:10 EDT
        NSE: Loaded 0 scripts for scanning.
        Initiating Parallel DNS resolution of 1 host. at 18:10
        Completed Parallel DNS resolution of 1 host. at 18:10, 0.00s elapsed
        Initiating UDP Scan at 18:10
        Scanning 74.13.25.94 [1 port]
        Completed UDP Scan at 18:10, 2.02s elapsed (1 total ports)
        Host 74.13.25.94 is up.
        Interesting ports on 74.13.25.94:
        PORT      STATE         SERVICE
        55515/udp open|filtered unknown

        Read data files from: /usr/share/nmap
        Nmap done: 1 IP address (1 host up) scanned in 2.08 seconds
        Raw packets sent: 2 (56B) | Rcvd: 5 (260B)

    Also, running netcat normally doesn't work either:

        squircle@summit:~$ netcat <my house> 55515
        <my house> [<my IP>] 55515 (?) : Connection refused

    Both boxes are Ubuntu Karmic (9.10). The receiver has no firewall, and outbound traffic on that port is allowed on the sender. I have no idea what to troubleshoot next. Any ideas?

    P.S.: Feel free to move this to SO/SF if you feel it would fit better there.
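    Two things stand out (my observations, not from the post). First, the nmap run used -sU, which probes UDP, while nc here speaks TCP, so that scan says nothing about the TCP port. Second, the two common netcat variants on Karmic disagree on listen syntax (traditional netcat needs -l -p PORT; OpenBSD netcat takes -l PORT). A quick local test isolates the listener from the router:

        # Terminal 1 on the receiver - try the other listen syntax if this one exits immediately
        nc -l -p 55515

        # Terminal 2 on the receiver - test the listener locally before involving the router
        echo hello | nc localhost 55515

        # From the sender, scan the TCP port (the original scan's -sU was UDP)
        sudo nmap -sT -PN -p55515 <my house>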

    Read the article

  • git private server error: "Permission denied (publickey)."

    - by goddfree
I followed the instructions here in order to set up a private git server on my Amazon EC2 instance. However, I am having problems when trying to SSH into the git account. Specifically, I get the error "Permission denied (publickey)."

    Here are the permissions of my files/folders on the EC2 server:

        drwx------ 4 git git 4096 Aug 13 19:52 /home/git/
        drwx------ 2 git git 4096 Aug 13 19:52 /home/git/.ssh
        -rw------- 1 git git  400 Aug 13 19:51 /home/git/.ssh/authorized_keys

    Here are the permissions of my files/folders on my own computer:

        drwx------ 5 CYT staff  170 Aug 13 14:51 .ssh
        -rw------- 1 CYT staff 1679 Aug 13 13:53 .ssh/id_rsa
        -rw-r--r-- 1 CYT staff  400 Aug 13 13:53 .ssh/id_rsa.pub
        -rw-r--r-- 1 CYT staff 1585 Aug 13 13:53 .ssh/known_hosts

    When checking my logs in /var/log/secure, I used to get the following error message every time I tried to SSH:

        Authentication refused: bad ownership or modes for file /home/git/.ssh/authorized_keys

    However, after making a few permission changes, I no longer get this error message. Despite this, I am still getting the "Permission denied (publickey)." message every time I try to SSH. The command I am using to SSH is ssh -T git@my-ip. Here is the full log I get when I run ssh -vT git@my-ip:

        OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
        debug1: Reading configuration data /etc/ssh_config
        debug1: /etc/ssh_config line 20: Applying options for *
        debug1: Connecting to my-ip [my-ip] port 22.
        debug1: Connection established.
        debug1: identity file /Users/CYT/.ssh/id_rsa type -1
        debug1: identity file /Users/CYT/.ssh/id_rsa-cert type -1
        debug1: identity file /Users/CYT/.ssh/id_dsa type -1
        debug1: identity file /Users/CYT/.ssh/id_dsa-cert type -1
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_6.2
        debug1: Remote protocol version 2.0, remote software version OpenSSH_6.2
        debug1: match: OpenSSH_6.2 pat OpenSSH*
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr [email protected] none
        debug1: kex: client->server aes128-ctr [email protected] none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Server host key: RSA 08:ad:8a:bc:ab:4d:5f:73:24:b2:78:69:46:1a:a5:5a
        debug1: Host 'my-ip' is known and matches the RSA host key.
        debug1: Found key in /Users/CYT/.ssh/known_hosts:1
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Trying private key: /Users/CYT/.ssh/id_rsa
        debug1: Trying private key: /Users/CYT/.ssh/id_dsa
        debug1: No more authentication methods to try.
        Permission denied (publickey).

    I have spent a few hours going through threads on various sites, including SO and SF, looking for a solution. It seems that the permissions for my files are all okay, but I just can't figure out the problem. Any help would be greatly appreciated.

    Edit: EEAA: Here are the outputs you requested:

        $ getent passwd git
        git:x:503:504::/home/git:/bin/bash
        $ grep ssh ~git/.ssh/authorized_keys | wc -l
        grep: /home/git/.ssh/authorized_keys: Permission denied
        0
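    A couple of checks worth trying (my suggestions, not from the thread). One detail in the log stands out: "identity file /Users/CYT/.ssh/id_rsa type -1" typically means the client could not load that key (a readable key reports a non-negative type), so it may never actually be offered. Explicitly naming the key and raising server-side visibility usually narrows this down (standard OpenSSH tooling; paths are from the post):

        # From the client: force the exact key and watch what is offered
        ssh -i ~/.ssh/id_rsa -vT git@my-ip

        # On the server: follow sshd's view of the failure while connecting
        sudo tail -f /var/log/secure

        # On the server: check the fingerprint(s) in authorized_keys against your key's
        sudo ssh-keygen -lf /home/git/.ssh/authorized_keys
        ssh-keygen -lf ~/.ssh/id_rsa.pub    # run locally and compare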

    Read the article

  • Introducing sp_ssiscatalog (v1.0.0.0)

    - by jamiet
Regular readers of my blog may know that over the last year I have made available a suite of SQL Server Reporting Services (SSRS) reports that provide visualisations of the data in the SQL Server Integration Services (SSIS) 2012 Catalog. Those reports are available at http://ssisreportingpack.codeplex.com. As I have built these reports and used them myself on a real-life project, a couple of things have dawned on me:
    - As soon as your SSIS Catalog gets a significant amount of data in it, the performance of the reports degrades rapidly. This is hampered by the fact that there are limitations on the SQL statements that I can embed within an SSRS report.
    - SSIS professionals are data guys at heart, and those types of people feel more comfortable in a query environment rather than having to go through the rigmarole of standing up a reporting server (well, I know I do anyway).

    Hence I have decided to take a different tack with the reporting pack. Taking my lead from Adam Machanic's sp_whoisactive and Brent Ozar's sp_blitz, I have produced sp_ssiscatalog, a stored procedure that makes it easy to get at the crucial data in the SSIS Catalog. I will spend the rest of this blog explaining exactly what sp_ssiscatalog does and how to use it, but if you would rather just download the bits yourself and start to play, you can download v1.0.0.0 from DB v1.0.0.0.

    Usage Scenarios

    Most Recent Execution
    I find that the information one most frequently needs from the SSIS Catalog pertains to the most recent execution. Hence if you execute sp_ssiscatalog with no parameters, that is exactly what you will get:

        EXEC [dbo].[sp_ssiscatalog]

    This will return up to 5 resultsets:
    - EXECUTION - summary information about the execution, including status, start time & end time
    - EVENTS - all events that occurred during the execution
    - OnError,OnTaskFailed - all events where event_name is either OnError or OnTaskFailed
    - OnWarning - all events where event_name is OnWarning
    - EXECUTABLE_STATS - duration and execution result of every executable in the execution

    All 5 resultsets will be displayed if there is any data satisfying that resultset. In other words, if there are no (for example) OnWarning events, then the OnWarning resultset will not be displayed. The display of these 5 resultsets can be toggled respectively by these 5 optional parameters (all of type BIT):
    - @exec_execution
    - @exec_events
    - @exec_errors
    - @exec_warnings
    - @exec_executable_stats

    Any Execution
    As just explained, the default behaviour is to supply data for the most recent execution. If you wish to specify which execution to return data for, simply supply the execution_id as a parameter:

        EXEC [dbo].[sp_ssiscatalog] 6

    All Executions
    sp_ssiscatalog can also return information about all executions:

        EXEC [dbo].[sp_ssiscatalog] @operation_type='execs'

    The most recent execution will appear at the top.
sp_ssiscatalog provides a number of parameters that enable you to filter the resultset:
    - @execs_folder_name
    - @execs_project_name
    - @execs_package_name
    - @execs_executed_as_name
    - @execs_status_desc

    Some typical usages might be:

        -- Return all failed executions
        EXEC [dbo].[sp_ssiscatalog] @operation_type='execs',@execs_status_desc='failed'

        -- Return all executions for a specified folder
        EXEC [dbo].[sp_ssiscatalog] @operation_type='execs',@execs_folder_name='My folder'

        -- Return all executions of a specified package in a specified project
        EXEC [dbo].[sp_ssiscatalog] @operation_type='execs',@execs_project_name='My project', @execs_package_name='Pkg.dtsx'

    Installing sp_ssiscatalog

    Under the covers, sp_ssiscatalog actually calls many other stored procedures and functions, hence creating it on your server is not simply a case of running a CREATE PROCEDURE script. I maintain the code in a SQL Server Data Tools (SSDT) database project, which means that you have two ways of obtaining it.

    Download the source code
    You can download the latest (at the time of writing) source code from http://ssisreportingpack.codeplex.com/SourceControl/changeset/view/70192. Hit the download button to download all the source code in a zip file. The contents of that zip file include an SSDT database project which you can open up in SSDT and publish just like any other SSDT database project. You can publish to a new database or any existing database, even [SSISDB] if you prefer.

    Download a dacpac
    Maintaining the code in an SSDT database project means that it can all get packaged up into a dacpac that you can then publish to your SQL Server. That dacpac is available from DB v1.0.0.0. Ordinarily a dacpac can be deployed to a SQL Server from SSMS using the Deploy Dacpac wizard; however, in this case there is a limitation. Due to sp_ssiscatalog referring to objects in the SSIS Catalog (which it has to do, of course), the dacpac contains a SqlCmd variable to store the name of the database that underpins the SSIS Catalog; unfortunately the Deploy Dacpac wizard in SSMS has a rather gaping limitation in that it cannot deploy dacpacs containing SqlCmd variables. Hence, we can use the command-line tool, sqlpackage.exe, instead. Don't worry if reverting to the command line sounds a little daunting; I assure you it is not. Simply open a Visual Studio command prompt, cd to the folder containing the downloaded dacpac, and type:

        "%PROGRAMFILES(x86)%\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe" /action:Publish /TargetDatabaseName:SsisReportingPack /SourceFile:SSISReportingPack.dacpac /Variables:SSISDB=SSISDB /TargetServerName:(local)

    or the shortened form:

        "%PROGRAMFILES(x86)%\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe" /a:Publish /tdn:SsisReportingPack /sf:SSISReportingPack.dacpac /v:SSISDB=SSISDB /tsn:(local)

    remembering to set your server name appropriately (here mine is set to "(local)"). If everything works successfully, you're done: you'll have a new database called [SsisReportingPack] which contains sp_ssiscatalog.

    Good luck with sp_ssiscatalog. I have been using it extensively on my own projects recently and it has proved to be very useful indeed. Rest assured, however, I will be adding many new capabilities in the future. Feedback is welcome. @Jamiet
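    Putting the documented parameters together, a typical triage call might look like this (a sketch based only on the parameters listed above, assuming the toggles default to on):

        -- Most recent execution, showing only the error and warning resultsets
        EXEC [dbo].[sp_ssiscatalog]
            @exec_execution = 0,
            @exec_events = 0,
            @exec_errors = 1,
            @exec_warnings = 1,
            @exec_executable_stats = 0;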

    Read the article

  • Salesforce/PHP - Bulk Outbound message (SOAP), Time out issue

    - by Phill Pafford
Salesforce can send up to 100 requests inside one SOAP message. While sending this type of bulk outbound message request, my PHP script finishes executing but SF fails to accept the ACK used to clear the message queue on the Salesforce side of things. Looking at the outbound message log (monitoring), I see all the messages in a pending state with the Delivery Failure Reason "java.net.SocketTimeoutException: Read timed out". If my script has finished execution, why do I get this error? I have tried these methods to increase the execution time on my server, as I have no access on the Salesforce side:

        set_time_limit(0); // in the script
        max_execution_time = 360   ; Maximum execution time of each script, in seconds
        max_input_time = 360       ; Maximum amount of time each script may spend parsing request data
        memory_limit = 32M         ; Maximum amount of memory a script may consume

    I used the high settings just for testing. Any thoughts as to why this is failing the ACK delivery back to Salesforce? Here is some of the code. This is how I accept the incoming SOAP request and send the ACK:

        $data = 'php://input';
        $content = file_get_contents($data);
        if ($content) {
            respond('true');
        } else {
            respond('false');
        }

    The respond function:

        function respond($tf) {
            $ACK = <<<ACK
        <?xml version = "1.0" encoding = "utf-8"?>
        <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
            <soapenv:Body>
                <notifications xmlns="http://soap.sforce.com/2005/09/outbound">
                    <Ack>$tf</Ack>
                </notifications>
            </soapenv:Body>
        </soapenv:Envelope>
        ACK;
            print trim($ACK);
        }

    These live in a generic script that I include into the script that uses the data for a specific workflow. I can process about 25 requests (that are in one SOAP response), but once I go over that I get the timeout error in the Salesforce queue. For 50 requests it usually takes my PHP script 86.77 seconds. Could it be Apache? PHP? I have also tested just accepting the 100-request SOAP response and sending the ACK without processing, and the queue clears out, so I know it's on my side of things. I show no errors in the Apache log; the script runs fine. Thanks for any insight into this, --Phill
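    One pattern that fits this symptom (a sketch of my own, not from the thread): Salesforce times out waiting for the ACK because it only arrives once PHP finishes the whole request, so force the ACK out first and do the slow processing afterwards. Whether your Apache/PHP stack delivers this without output buffering or gzip getting in the way is an assumption to verify:

        <?php
        // Send the ACK immediately, then keep working after the connection closes
        $ack = trim($ACK);                      // the envelope built in respond() above
        ignore_user_abort(true);                // keep running once Salesforce disconnects
        header('Connection: close');
        header('Content-Length: ' . strlen($ack));
        echo $ack;
        flush();                                // push the response out now

        // ... now parse $content and run the slow per-record workflow ...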

    Read the article

  • :: Help Needed to parse ksoap response using J2ME ::

    - by Sachin
    Hi guys, I am developing a mobile application using J2ME, LWUIT and KSOAP. The application makes .NET web service calls and fetches responses. I am able to successfully make calls and receive responses, but I am not able to parse the response, due to my limited knowledge of Java. Following are my WSDL file and the J2ME code snippet used to make the calls. The WSDL file has complexType and simpleType elements, which need to be mapped to Java classes. I ask you guys to help me out with any pointers or sample code. WSDL file: <?xml version="1.0" encoding="utf-8"?> <wsdl:definitions xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:tm="http://microsoft.com/wsdl/mime/textMatching/" xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/" xmlns:mime="http://schemas.xmlsoap.org/wsdl/mime/" xmlns:tns="http://tempuri.org/" xmlns:s="http://www.w3.org/2001/XMLSchema" xmlns:soap12="http://schemas.xmlsoap.org/wsdl/soap12/" xmlns:http="http://schemas.xmlsoap.org/wsdl/http/" targetNamespace="http://tempuri.org/" xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"> <wsdl:types> <s:schema elementFormDefault="qualified" targetNamespace="http://tempuri.org/"> <s:element name="Login"> <s:complexType> <s:sequence> <s:element minOccurs="0" maxOccurs="1" name="userLoginID" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="password" type="s:string" /> </s:sequence> </s:complexType> </s:element> <s:element name="LoginResponse"> <s:complexType> <s:sequence> <s:element minOccurs="1" maxOccurs="1" name="User" nillable="true" type="tns:UserBin" /> </s:sequence> </s:complexType> </s:element> <s:complexType name="UserBin" abstract="true"> <s:sequence> <s:element minOccurs="1" maxOccurs="1" name="CompanyCodeSeqId" type="s:int" /> <s:element minOccurs="0" maxOccurs="1" name="Image" type="s:base64Binary" /> <s:element minOccurs="1" maxOccurs="1" name="DateOfBirth" type="s:dateTime" /> <s:element minOccurs="1" maxOccurs="1" name="UserSeqId" type="s:int" /> <s:element minOccurs="0" maxOccurs="1" name="UserFirstName" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="UserLastName" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="PassWord" type="s:string" /> <s:element minOccurs="1" maxOccurs="1" name="UserRole" type="tns:Roles" /> <s:element minOccurs="1" maxOccurs="1" name="UserSSN" type="s:int" /> <s:element minOccurs="0" maxOccurs="1" name="EmailId" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="MobileNumber" type="s:string" /> <s:element minOccurs="1" maxOccurs="1" name="CreatedDate" type="s:dateTime" /> <s:element minOccurs="1" maxOccurs="1" name="ModifiedDate" type="s:dateTime" /> <s:element minOccurs="1" maxOccurs="1" name="UserGroup" type="tns:UserGroups" /> <s:element minOccurs="1" maxOccurs="1" name="SecretQuestionID" type="s:int" /> <s:element minOccurs="0" maxOccurs="1" name="SecretAnswer" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="WorkPhone" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="HomePhone" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="Company" type="s:string" /> <s:element minOccurs="1" maxOccurs="1" name="PreviousLoginTime" type="s:dateTime" /> <s:element minOccurs="1" maxOccurs="1" name="LoginTime" type="s:dateTime" /> </s:sequence> </s:complexType> <s:simpleType name="Roles"> <s:restriction base="s:string"> <s:enumeration value="Guest" /> <s:enumeration value="Customer" /> <s:enumeration value="Driver" /> <s:enumeration value="Dispatcher" /> <s:enumeration value="CompanyCodeAdmin" />
</s:restriction> </s:simpleType> <s:simpleType name="UserGroups"> <s:restriction base="s:string"> <s:enumeration value="Invalid" /> <s:enumeration value="Customer" /> <s:enumeration value="Driver" /> <s:enumeration value="Dispatcher" /> </s:restriction> </s:simpleType> <s:complexType name="DriverBin"> <s:complexContent mixed="false"> <s:extension base="tns:UserBin"> <s:sequence> <s:element minOccurs="1" maxOccurs="1" name="DriverGroupId" type="s:int" /> <s:element minOccurs="1" maxOccurs="1" name="DriverTypeId" type="s:int" /> <s:element minOccurs="1" maxOccurs="1" name="HireDate" type="s:dateTime" /> <s:element minOccurs="0" maxOccurs="1" name="LicenceNumber" type="s:string" /> <s:element minOccurs="1" maxOccurs="1" name="ExpiryDateForLicence" type="s:dateTime" /> <s:element minOccurs="0" maxOccurs="1" name="VehicleNumber" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="EmergencyName" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="EmergencyPhone" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="EmergencyAddress" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="EmergencyRelationship" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="DriverType" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="DriverGroupName" type="s:string" /> <s:element minOccurs="1" maxOccurs="1" name="VehicleID" type="s:int" /> <s:element minOccurs="0" maxOccurs="1" name="SocialSN" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="StreetAddress" type="s:string" /> <s:element minOccurs="1" maxOccurs="1" name="City" type="s:int" /> <s:element minOccurs="1" maxOccurs="1" name="State" type="s:int" /> <s:element minOccurs="0" maxOccurs="1" name="Zip" type="s:string" /> <s:element minOccurs="1" maxOccurs="1" name="EmergencyCity" type="s:int" /> <s:element minOccurs="1" maxOccurs="1" name="EmergencyState" type="s:int" /> <s:element minOccurs="0" maxOccurs="1" name="EmergencyZip" type="s:string" /> <s:element minOccurs="1" maxOccurs="1" name="TerminationDate" type="s:dateTime" /> <s:element minOccurs="1" maxOccurs="1" name="HireAgainFlag" type="s:boolean" /> <s:element minOccurs="0" maxOccurs="1" name="TerminationReason" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="Notes" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="ImageName" type="s:string" /> </s:sequence> </s:extension> </s:complexContent> </s:complexType> <s:complexType name="CustomerBin"> <s:complexContent mixed="false"> <s:extension base="tns:UserBin"> <s:sequence> <s:element minOccurs="0" maxOccurs="1" name="PassengesDetails" type="tns:ArrayOfPassengerBin" /> <s:element minOccurs="0" maxOccurs="1" name="CompanyName" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="CreditCardDetailsArray" type="tns:ArrayOfCreditCardDetailsBin" /> <s:element minOccurs="0" maxOccurs="1" name="AddressArray" type="tns:ArrayOfAddressBin" /> <s:element minOccurs="1" maxOccurs="1" name="CustomerCompanyID" type="s:int" /> <s:element minOccurs="1" maxOccurs="1" name="CustomerType" type="tns:CustomerType" /> <s:element minOccurs="0" maxOccurs="1" name="PassengerGradeName" type="s:string" /> <s:element minOccurs="1" maxOccurs="1" name="PassengerGradeID" type="s:int" /> </s:sequence> </s:extension> </s:complexContent> </s:complexType> <s:complexType name="ArrayOfPassengerBin"> <s:sequence> <s:element minOccurs="0" maxOccurs="unbounded" name="PassengerBin" nillable="true" type="tns:PassengerBin" /> </s:sequence> </s:complexType> <s:complexType 
name="PassengerBin"> <s:sequence> <s:element minOccurs="1" maxOccurs="1" name="CustomerSeqID" type="s:int" /> <s:element minOccurs="0" maxOccurs="1" name="EmailID" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="PhoneNumber" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="LastName" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="FirstName" type="s:string" /> <s:element minOccurs="1" maxOccurs="1" name="PassengerSeqID" nillable="true" type="s:int" /> <s:element minOccurs="1" maxOccurs="1" name="IsSelf" type="s:boolean" /> </s:sequence> </s:complexType> <s:complexType name="ArrayOfCreditCardDetailsBin"> <s:sequence> <s:element minOccurs="0" maxOccurs="unbounded" name="CreditCardDetailsBin" nillable="true" type="tns:CreditCardDetailsBin" /> </s:sequence> </s:complexType> <s:complexType name="CreditCardDetailsBin"> <s:sequence> <s:element minOccurs="1" maxOccurs="1" name="CardSeqID" nillable="true" type="s:int" /> <s:element minOccurs="1" maxOccurs="1" name="ExpiryYear" type="s:int" /> <s:element minOccurs="1" maxOccurs="1" name="ExpiryMonth" type="s:int" /> <s:element minOccurs="0" maxOccurs="1" name="CardType" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="NickName" type="s:string" /> <s:element minOccurs="1" maxOccurs="1" name="CVVNumber" type="s:int" /> <s:element minOccurs="0" maxOccurs="1" name="CreditCardNumber" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="NameOnTheCard" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="ZipCode" type="s:string" /> <s:element minOccurs="1" maxOccurs="1" name="IsPrimary" type="s:boolean" /> </s:sequence> </s:complexType> <s:complexType name="ArrayOfAddressBin"> <s:sequence> <s:element minOccurs="0" maxOccurs="unbounded" name="AddressBin" nillable="true" type="tns:AddressBin" /> </s:sequence> </s:complexType> <s:complexType name="AddressBin"> <s:sequence> <s:element minOccurs="1" maxOccurs="1" name="UserSeqID" type="s:int" /> <s:element minOccurs="1" maxOccurs="1" name="AddressID" nillable="true" type="s:int" /> <s:element minOccurs="1" maxOccurs="1" name="ZipCode" type="s:int" /> <s:element minOccurs="1" maxOccurs="1" name="IsPrimary" type="s:boolean" /> <s:element minOccurs="0" maxOccurs="1" name="State" type="s:string" /> <s:element minOccurs="1" maxOccurs="1" name="StateID" type="s:int" /> <s:element minOccurs="0" maxOccurs="1" name="StateCode" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="City" type="s:string" /> <s:element minOccurs="1" maxOccurs="1" name="CityID" type="s:int" /> <s:element minOccurs="0" maxOccurs="1" name="StreetAddress" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="NickName" type="s:string" /> </s:sequence> </s:complexType> <s:simpleType name="CustomerType"> <s:restriction base="s:string"> <s:enumeration value="Individual" /> <s:enumeration value="Corporate" /> </s:restriction> </s:simpleType> <s:complexType name="DispatcherBin"> <s:complexContent mixed="false"> <s:extension base="tns:UserBin"> <s:sequence> <s:element minOccurs="0" maxOccurs="1" name="Address1" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="Address2" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="City" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="Province" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="ZipCode" type="s:string" /> <s:element minOccurs="1" maxOccurs="1" name="IsActive" type="s:boolean" /> <s:element minOccurs="1" maxOccurs="1" 
name="DispatcherHireDate" type="s:dateTime" /> <s:element minOccurs="1" maxOccurs="1" name="DispatcherSSN" type="s:int" /> <s:element minOccurs="1" maxOccurs="1" name="TerminationDate" type="s:dateTime" /> <s:element minOccurs="0" maxOccurs="1" name="ReasonForTermination" type="s:string" /> <s:element minOccurs="1" maxOccurs="1" name="HireAgainFlag" type="s:boolean" /> <s:element minOccurs="0" maxOccurs="1" name="EmergencyContactName" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="EmergencyContactNumber" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="EmergencyContactAddress" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="EmergencyContactRelationship" type="s:string" /> <s:element minOccurs="1" maxOccurs="1" name="HireDate" type="s:dateTime" /> </s:sequence> </s:extension> </s:complexContent> </s:complexType> <s:element name="Logout"> <s:complexType> <s:sequence> <s:element minOccurs="0" maxOccurs="1" name="userLoginID" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="password" type="s:string" /> <s:element minOccurs="1" maxOccurs="1" name="userSeqID" type="s:int" /> <s:element minOccurs="0" maxOccurs="1" name="validationKey" type="s:string" /> </s:sequence> </s:complexType> </s:element> <s:element name="LogoutResponse"> <s:complexType /> </s:element> </s:schema> </wsdl:types> <wsdl:message name="LoginSoapIn"> <wsdl:part name="parameters" element="tns:Login" /> </wsdl:message> <wsdl:message name="LoginSoapOut"> <wsdl:part name="parameters" element="tns:LoginResponse" /> </wsdl:message> <wsdl:message name="LogoutSoapIn"> <wsdl:part name="parameters" element="tns:Logout" /> </wsdl:message> <wsdl:message name="LogoutSoapOut"> <wsdl:part name="parameters" element="tns:LogoutResponse" /> </wsdl:message> <wsdl:portType name="AccountManagementSoap"> <wsdl:operation name="Login"> <wsdl:input message="tns:LoginSoapIn" /> <wsdl:output message="tns:LoginSoapOut" /> </wsdl:operation> <wsdl:operation name="Logout"> <wsdl:input message="tns:LogoutSoapIn" /> <wsdl:output message="tns:LogoutSoapOut" /> </wsdl:operation> </wsdl:portType> <wsdl:binding name="AccountManagementSoap" type="tns:AccountManagementSoap"> <soap:binding transport="http://schemas.xmlsoap.org/soap/http" /> <wsdl:operation name="Login"> <soap:operation soapAction="http://tempuri.org/Login" style="document" /> <wsdl:input> <soap:body use="literal" /> </wsdl:input> <wsdl:output> <soap:body use="literal" /> </wsdl:output> </wsdl:operation> <wsdl:operation name="Logout"> <soap:operation soapAction="http://tempuri.org/Logout" style="document" /> <wsdl:input> <soap:body use="literal" /> </wsdl:input> <wsdl:output> <soap:body use="literal" /> </wsdl:output> </wsdl:operation> </wsdl:binding> <wsdl:binding name="AccountManagementSoap12" type="tns:AccountManagementSoap"> <soap12:binding transport="http://schemas.xmlsoap.org/soap/http" /> <wsdl:operation name="Login"> <soap12:operation soapAction="http://tempuri.org/Login" style="document" /> <wsdl:input> <soap12:body use="literal" /> </wsdl:input> <wsdl:output> <soap12:body use="literal" /> </wsdl:output> </wsdl:operation> <wsdl:operation name="Logout"> <soap12:operation soapAction="http://tempuri.org/Logout" style="document" /> <wsdl:input> <soap12:body use="literal" /> </wsdl:input> <wsdl:output> <soap12:body use="literal" /> </wsdl:output> </wsdl:operation> </wsdl:binding> <wsdl:service name="AccountManagement"> <wsdl:port name="AccountManagementSoap" binding="tns:AccountManagementSoap"> <soap:address 
location="http://webservice.mcubeit.com/trs_webservice/services/AccountManagement.asmx" /> </wsdl:port> <wsdl:port name="AccountManagementSoap12" binding="tns:AccountManagementSoap12"> <soap12:address location="http://webservice.mcubeit.com/trs_webservice/services/AccountManagement.asmx" /> </wsdl:port> </wsdl:service> </wsdl:definitions> J2ME Code Snippet: String uname = username.getText(); String pass = password.getText(); String serviceUrl = "http://xxx.xxx.xxx/webservice/services/AccountManagement.asmx"; String serviceNameSpace = "http://tempuri.org/"; String soapAction = "http://tempuri.org/Login"; String methodName = "Login"; SoapObject rpc = new SoapObject(serviceNameSpace, methodName); rpc.addProperty("userLoginID", uname.trim()); rpc.addProperty("password", pass.trim()); //rpc.addProperty("userSeqID", String.valueOf(192).toString()); SoapSerializationEnvelope envelope = new SoapSerializationEnvelope(SoapEnvelope.VER11); envelope.bodyOut = rpc; envelope.dotNet = true; envelope.encodingStyle = SoapSerializationEnvelope.ENC; HttpTransport ht = new HttpTransport(serviceUrl); ht.debug = true; ht.setXmlVersionTag("<?xml version=\"1.0\" encoding=\"UTF-8\"?>"); String result = null; try { ht.call(soapAction, envelope); result = (envelope.getResponse()).toString(); System.out.println("Result :" + result.toString()); } catch (org.xmlpull.v1.XmlPullParserException ex2) { System.out.println("XmlPullParserException :" + ex2.toString()); System.out.println("Request \n" + ht.requestDump); System.out.println("Response \n" + ht.responseDump); } catch (SoapFault sf) { System.out.println("SoapFault :" + sf.faultstring); System.out.println("Request \n" + ht.requestDump); System.out.println("Response \n" + ht.responseDump); } catch (IOException ioe) { System.out.println("IOException :" + ioe.toString()); System.out.println("Request \n" + ht.requestDump); System.out.println("Response \n" + ht.responseDump); } RESPONSE Result :CustomerBin{CompanyCodeSeqId=-1; Image=AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==; DateOfBirth=1900-01-01T00:00:00; UserSeqId=192; UserFirstName=Sachin; UserLastName=Nevase; PassWord=anyType{}; UserRole=Customer; UserSSN=-2147483648; [email protected]; MobileNumber=804131244; CreatedDate=1900-01-01T00:00:00; 
ModifiedDate=1900-01-01T00:00:00; UserGroup=Customer; SecretQuestionID=-2147483648; SecretAnswer=anyType{}; WorkPhone=anyType{}; HomePhone=anyType{}; Company=anyType{}; PreviousLoginTime=2010-05-04T23:38:34; LoginTime=1900-01-01T00:00:00; PassengesDetails=anyType{PassengerBin=anyType{CustomerSeqID=192; [email protected]; PhoneNumber=0804131244; LastName=Nevase; FirstName=Sachin; PassengerSeqID=55; IsSelf=true; }; }; CustomerCompanyID=-1; CustomerType=Individual; PassengerGradeName=Grade1; PassengerGradeID=1; } Thanks, Sachin
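
    The response kSOAP hands back is a tree of SoapObject instances (which is what the dump above shows), so one option is to walk the tree with getProperty() rather than generating full class bindings. The sketch below is written against the kSOAP 2 API, with property names taken from the WSDL, so treat the exact names as assumptions:

    import org.ksoap2.serialization.SoapObject;
    import org.ksoap2.serialization.SoapSerializationEnvelope;

    // Sketch: hand-map the LoginResponse tree. "envelope" is the
    // SoapSerializationEnvelope after ht.call() has returned.
    private void printUser(SoapSerializationEnvelope envelope) {
        SoapObject response = (SoapObject) envelope.bodyIn;          // <LoginResponse>
        SoapObject user = (SoapObject) response.getProperty("User"); // UserBin subtype

        System.out.println(user.getProperty("UserFirstName") + " "
                         + user.getProperty("UserLastName") + " <"
                         + user.getProperty("EmailId") + ">");

        // Nested complex types arrive as SoapObjects too; note the
        // "PassengesDetails" spelling comes straight from the WSDL
        SoapObject passengers = (SoapObject) user.getProperty("PassengesDetails");
        for (int i = 0; i < passengers.getPropertyCount(); i++) {
            SoapObject p = (SoapObject) passengers.getProperty(i);
            System.out.println("Passenger: " + p.getProperty("FirstName")
                             + " " + p.getProperty("LastName"));
        }
    }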

    Read the article

  • z-index issues with jQuery Tabs, Superfish Menu

    - by NightMICU
    Hi all, For the life of me I cannot get my Superfish menu to stop hiding behind my jQuery UI tabs in IE 7. I have read the documentation out there, have tried changing z-index values and tried the bgIframe plugin, although I am not sure if I am implementing it correctly (left out in my example below, using Supersubs). Here is the Javascript I am using for Superfish - using the Supersubs plugin: $(document).ready(function() { $("ul.sf-menu").supersubs({ minWidth: 12, // minimum width of sub-menus in em units maxWidth: 27, // maximum width of sub-menus in em units extraWidth: 1 // extra width can ensure lines don't sometimes turn over // due to slight rounding differences and font-family }).superfish({ delay: 1000, // one second delay on mouseout animation: {opacity:'show',height:'show'}, // fade-in and slide-down animation speed: 'medium' // faster animation speed }); }); And here is the structure of my page: <div id="page-container"> <div id="header"></div> <div id="nav-admin"> <!-- This is where Superfish goes --> </div> <div id="header-shadow"></div> <div id="content"> <div id="admin-main"> <div id="tabs"> <ul> <li><a href="#tabs-1">Tab 1</a></li> <li><a href="#tabs-2">Tab 2</a></li> </ul> <div id="tabs-1"> <!-- Content for Tab 1 --> </div> <div id="tabs-2"> <!-- Content for Tab 2 --> </div> </div> </div> </div> <div id="footer-shadow"></div> <div id="footer"> <div id="alt-nav"> <?php include $_SERVER['DOCUMENT_ROOT'] . '/includes/altnav.inc.php'; //CHANGE WHEN UPLOADED TO MATCH DOCUMENT ROOT ?> </div> </div> </div>
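
    IE7 gives every positioned element its own stacking context, so a z-index on the Superfish ul alone often cannot win against the tabs. The usual remedy, sketched here with the container ids from the markup above, is to rank the containers themselves:

    /* Sketch: in IE7, z-index is resolved between positioned ancestors,
       so #nav-admin must outrank #content for the dropdowns to overlap
       the tabs. Selectors match the page structure in the question. */
    #nav-admin {
        position: relative; /* z-index only applies to positioned elements */
        z-index: 100;
    }
    #content {
        position: relative;
        z-index: 1;
    }

    If an ancestor such as #page-container is itself positioned, the same ordering has to hold at that level too, since IE7 compares z-index only between siblings within the same stacking context.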

    Read the article

  • Salesforce consuming XML and display data in Visualforce report

    - by JavaKungFu
    Firstly, this question requires a bit of introduction, so please bear with me. The high level is that I am connecting to an outside web service which will return some XML to my Apex controller. The idea is that I want to display the returned XML in a nice tabular format on a Visualforce page. The format of the XML coming back will look something like this:

    <Wrapper><reportTable name='table_id' title='Report Title'>
    <row>
    <Element1><![CDATA[campaign_id]]></Element1>
    <Element2><![CDATA[577373]]></Element2>
    <Element3><![CDATA[4129]]></Element3>
    <Element4 dataFormat='2' dataSuffix='%'><![CDATA[0.7151]]></Element4>
    <Element5><![CDATA[2010-04-04]]></Element5>
    <Element6><![CDATA[2010-05-03]]></Element6>
    </row>
    </reportTable> ...

    Currently I am using the XMLdom utility class (developed by SF for XML functions) to map this data into a custom object "reportTable" which contains a list of "row" custom objects. The reason I am building it out this way is that I don't know how many elements will be in each row, nor the number of rows. The Visualforce page looks something like this:

    <table><apex:repeat value="{!reportTables}" var="table">
    <apex:repeat value="{!table.rows}" var="row">
    <tr>
    <apex:repeat value="{!row.ColumnValue}" var="column">
    <apex:repeat value="{!column}" var="value">
    <td> <apex:outputText value="{!value}" /> </td>
    </apex:repeat>
    </apex:repeat>
    </tr>
    </apex:repeat>

    My questions are: 1) Does this seem like a good approach to the problem? 2) Is there a simpler/better way to consume the XML besides writing my own custom objects to map VF to? Open to any and all suggestions. I really hope there is a better way than building the HTML table myself, as then I also have to deal with styling and alignment etc. Thanks.
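
    As one possible answer to question 2: if the org's API version includes the native Dom classes, the XML can be walked without the XMLdom utility. Below is a sketch that assumes the payload shape shown above and a responseXml String holding the web service reply (both names are placeholders):

    // Sketch: parse the reportTable payload with Apex's built-in Dom classes.
    // Element names follow the sample XML in the question.
    Dom.Document doc = new Dom.Document();
    doc.load(responseXml);                         // responseXml: raw XML string

    Dom.XmlNode wrapper = doc.getRootElement();    // <Wrapper>
    for (Dom.XmlNode tbl : wrapper.getChildElements()) {
        if (tbl.getName() != 'reportTable') continue;
        System.debug('Table: ' + tbl.getAttribute('title', null));
        for (Dom.XmlNode row : tbl.getChildElements()) {      // <row> nodes
            for (Dom.XmlNode cell : row.getChildElements()) { // Element1..N
                System.debug(cell.getName() + ' = ' + cell.getText());
            }
        }
    }

    Wrapper and row objects would still be needed to feed apex:repeat, but the parsing itself stays generic about how many rows and elements come back.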

    Read the article

  • How to use second level cache for lazy loaded collections in Hibernate?

    - by Chandru
    Let's say I have two entities, Employee and Skill. Every employee has a set of skills. Now, when I load the skills lazily through the Employee instances, the cache is not used for skills in different instances of Employee. Let's consider the following data set:

    Employee - 1 : Java, PHP
    Employee - 2 : Java, PHP

    When I load Employee - 2 after Employee - 1, I do not want Hibernate to hit the database to get the skills; it should instead use the Skill instances already available in the cache. Is this possible? If so, how? Hibernate configuration:

    <session-factory>
    <property name="hibernate.connection.driver_class">com.mysql.jdbc.Driver</property>
    <property name="hibernate.connection.password">pass</property>
    <property name="hibernate.connection.url">jdbc:mysql://localhost/cache</property>
    <property name="hibernate.connection.username">root</property>
    <property name="hibernate.dialect">org.hibernate.dialect.MySQLInnoDBDialect</property>
    <property name="hibernate.cache.use_second_level_cache">true</property>
    <property name="hibernate.cache.use_query_cache">true</property>
    <property name="hibernate.cache.provider_class">net.sf.ehcache.hibernate.EhCacheProvider</property>
    <property name="hibernate.hbm2ddl.auto">update</property>
    <property name="hibernate.show_sql">true</property>
    <mapping class="org.cache.models.Employee" />
    <mapping class="org.cache.models.Skill" />
    </session-factory>

    The entities, with imports, getters and setters removed:

    @Entity
    @Table(name = "employee")
    @Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
    public class Employee {
        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY)
        private int id;
        private String name;

        public Employee() { }

        @ManyToMany
        @JoinTable(name = "employee_skills", joinColumns = @JoinColumn(name = "employee_id"), inverseJoinColumns = @JoinColumn(name = "skill_id"))
        @Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
        private List<Skill> skills;
    }

    @Entity
    @Table(name = "skill")
    @Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
    public class Skill {
        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY)
        private int id;
        private String name;
    }

    SQL for loading the second employee and his skills:

    Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from employee employee0_ where employee0_.id=?
    Hibernate: select skills0_.employee_id as employee1_1_, skills0_.skill_id as skill2_1_, skill1_.id as id1_0_, skill1_.name as name1_0_ from employee_skills skills0_ left outer join skill skill1_ on skills0_.skill_id=skill1_.id where skills0_.employee_id=?

    Of these, I specifically want to avoid the second query, as the first one is unavoidable anyway.
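
    As a first diagnostic step, it may help to confirm what the cache is actually doing across the two loads. The sketch below assumes hibernate.generate_statistics is enabled and a getSkills() accessor on Employee (the getters were elided above):

    import org.hibernate.Session;
    import org.hibernate.SessionFactory;
    import org.hibernate.stat.Statistics;

    // Sketch: measure whether the second-level cache is consulted at all.
    // Requires <property name="hibernate.generate_statistics">true</property>.
    public static void checkCache(SessionFactory sessionFactory) {
        Statistics stats = sessionFactory.getStatistics();

        Session s1 = sessionFactory.openSession();
        Employee e1 = (Employee) s1.get(Employee.class, 1);
        e1.getSkills().size();            // trigger the lazy load for Employee 1
        s1.close();

        Session s2 = sessionFactory.openSession();
        Employee e2 = (Employee) s2.get(Employee.class, 2);
        e2.getSkills().size();            // ideally served from the cache
        s2.close();

        System.out.println("L2 hits:   " + stats.getSecondLevelCacheHitCount());
        System.out.println("L2 misses: " + stats.getSecondLevelCacheMissCount());
    }

    If the miss count climbs on the second load, the collection cache region for Employee.skills is the place to look, since Hibernate caches a collection per owning entity and then resolves its elements through the Skill entity region.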

    Read the article

  • Saxon XSLT-Transformation: How to change serialization of an empty tag from <x/> to <x></x>?

    - by Ben
    Hello folks! I do some XSLT transformation using Saxon HE 9.2, with the output later being unmarshalled by Castor 1.3.1. The whole thing runs on Java 6. My XSLT transformation looks like this:

    <xsl:transform version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:ns="http://my/own/custom/namespace/for/the/target/document">
        <xsl:output method="xml" encoding="UTF-8" indent="no" />
        <xsl:template match="/">
            <ns:item>
                <ns:property name="id">
                    <xsl:value-of select="/some/complicated/xpath" />
                </ns:property>
                <!-- ... more ... -->
        </xsl:template>

    So the thing is: if the XPath expression /some/complicated/xpath evaluates to an empty sequence, the Saxon serializer writes <ns:property/> instead of <ns:property></ns:property>. This, however, confuses the Castor unmarshaller, which is next in the pipeline and which unmarshals the output of the transformation to instances of XSD-generated Java code. So my question is: how can I tell the Saxon serializer not to output empty elements as standalone (self-closing) tags? Here is, approximately, what I am currently doing to execute the transformation:

    import net.sf.saxon.s9api.*;
    import javax.xml.transform.*;
    import javax.xml.transform.sax.SAXSource;
    // ...

    // initialize the processor
    Processor processor = new Processor(false);

    // read data
    XMLReader xmlReader = XMLReaderFactory.createXMLReader();
    // ... there is some more setting up the xmlReader here ...
    InputStream xsltStream = new FileInputStream(xsltFile);
    InputStream inputStream = new FileInputStream(inputFile);
    Source xsltSource = new SAXSource(xmlReader, new InputSource(xsltStream));
    Source inputSource = new SAXSource(xmlReader, new InputSource(inputStream));
    XdmNode input = processor.newDocumentBuilder().build(inputSource);

    // initialize transformation configuration
    XsltCompiler compiler = processor.newXsltCompiler();
    compiler.setErrorListener(this);
    XsltExecutable executable = compiler.compile(xsltSource);
    Serializer serializer = new Serializer();
    serializer.setOutputProperty(Serializer.Property.METHOD, "xml");
    serializer.setOutputProperty(Serializer.Property.INDENT, "no");
    serializer.setOutputStream(output);

    // execute transformation
    XsltTransformer transformer = executable.load();
    transformer.setInitialContextNode(input);
    transformer.setErrorListener(this);
    transformer.setDestination(serializer);
    transformer.setSchemaValidationMode(ValidationMode.STRIP);
    transformer.transform();

    I'd appreciate any hint pointing in the direction of a solution. :-) In case of any unclarity I'd be happy to give more details. Nightly greetings from Germany, Benjamin
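
    Worth noting: <x/> and <x></x> are indistinguishable to any conforming XML parser, so the root cause may sit on the Castor side. Failing a serializer switch, one blunt stopgap is to expand the empty tags between Saxon and Castor. This is a sketch only; it assumes the serialized output contains no CDATA sections or comments that the pattern could false-match:

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Sketch: rewrite self-closing tags (<ns:x .../> -> <ns:x ...></ns:x>)
    // in the serialized document before handing it to Castor.
    public static String expandEmptyTags(String xml) {
        Pattern selfClosing = Pattern.compile("<([\\w:.-]+)([^<>]*?)/>");
        Matcher m = selfClosing.matcher(xml);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            String expanded = "<" + m.group(1) + m.group(2) + "></" + m.group(1) + ">";
            m.appendReplacement(out, Matcher.quoteReplacement(expanded));
        }
        m.appendTail(out);
        return out.toString();
    }

    To use it, the Serializer would write to a buffer (for example a ByteArrayOutputStream) instead of directly to the final output, with expandEmptyTags() run over the buffered text before unmarshalling.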

    Read the article

  • Set the Dropdown box as selected in Javascript

    - by Aruna
    I am using JavaScript in my app. In my table, I have a column named industry; a row contains values like:

    id: 69
    name: aruna
    username: aruna
    email: [email protected]
    password: bd09935e199eab3d48b306d181b4e3d1:i0ODTgVXbWyP1iFOV
    type: 3
    industry: | Insurance | Analytics | EIS and Process Engineering

    This industry value is inserted from a multi-select dropdown box. Now, on load, I am trying to make my form contain these values, where industry is the dropdown box:

    <select id="ind1" moslabel="Industry" onClick="industryfn();" mosreq="0" multiple="multiple" size="3" class="inputbox" name="industry1[]">
    <option value="Banking and Financial Services">Banking and Financial Services</option>
    <option value="Insurance">Insurance</option>
    <option value="Telecom">Telecom</option>
    <option value="Government ">Government </option>
    <option value="Healthcare &amp; Life sciences">Healthcare & Life sciences</option>
    <option value="Energy">Energy</option>
    <option value="Retail &amp;Consumer products">Retail &Consumer products</option>
    <option value="Energy, resources &amp; utilities">Energy, resources & utilities</option>
    <option value="Travel and Hospitality">Travel and Hospitality</option>
    <option value="Manufacturing">Manufacturing</option>
    <option value="High Tech">High Tech</option>
    <option value="Media and Information Services">Media and Information Services</option>
    </select>

    How do I keep the industry values (| Insurance | Analytics | EIS and Process Engineering) selected? EDIT:

    Window.onDomReady(function(){
    user-get('industry');
    $s=explode('|', $str) ?
    var selectedFields = new Array();
    <?php for($i=1;$i<count($s);$i++){?>
    selectedFields.push("<?php echo $s[$i];?>");
    <?php }?>
    for(i=1;i<selectedFields.length;i++) {
        var select=selectedFields[i];
        for (var ii = 0; ii < document.getElementById('ind1').length; ii++) {
            var value=document.getElementById('ind1').options[ii].value;
            alert(value);
            alert(select);
            if(value==select) {
                document.getElementById('ind1').options[ii].selected=selected;
            }//If
        } //inner For
    }//outer For
    </script>

    I have tried the above; the alert functions work correctly, but the if loop does not work correctly. Why so? Please help me.
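
    For reference, here is a cleaned-up sketch of that loop. It assumes the saved industry string has already been delivered to JavaScript (for example via the PHP echo loop above). The likely culprit in the EDIT is options[ii].selected=selected; the right-hand selected is an undefined variable where a boolean is required:

    // Sketch: select every <option> in #ind1 whose value appears in the
    // pipe-delimited industry string saved for the user.
    function selectSavedIndustries(industryString) {
        var saved = industryString.split('|');
        var dropdown = document.getElementById('ind1');
        for (var i = 0; i < dropdown.options.length; i++) {
            var value = dropdown.options[i].value;
            for (var j = 0; j < saved.length; j++) {
                // trim the padding left over from "| Insurance | ..." storage
                var wanted = saved[j].replace(/^\s+|\s+$/g, '');
                if (value === wanted) {
                    dropdown.options[i].selected = true;   // boolean, not "selected"
                }
            }
        }
    }

    // e.g. selectSavedIndustries('| Insurance | Analytics | EIS and Process Engineering');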

    Read the article

  • Generate MetaData with ANT

    - by Neil Foley
    I have a folder structure that contains multiple JavaScript files; each of these files needs a standard piece of text at the top: //@include "includes.js". Each folder needs to contain a file named includes.js that has an include entry for each file in its directory and an entry for the include file in its parent directory. I'm trying to achieve this using Ant and it's not going too well. So far I have the following, which does the job of inserting the header, but not without actually moving or copying the file. I have heard people mention the <replace> task for this but am a bit stumped.

    <?xml version="1.0" encoding="UTF-8"?>
    <project name="JavaContentAssist" default="start" basedir=".">
        <taskdef resource="net/sf/antcontrib/antcontrib.properties">
            <classpath>
                <pathelement location="C:/dr_workspaces/Maven Repository/.m2/repository/ant-contrib/ant-contrib/20020829/ant-contrib-20020829.jar"/>
            </classpath>
        </taskdef>
        <target name="start">
            <foreach target="strip" param="file">
                <fileset dir="${basedir}">
                    <include name="**/*.js"/>
                    <exclude name="**/includes.js"/>
                </fileset>
            </foreach>
        </target>
        <target name="strip">
            <move file="${file}" tofile="${a_location}" overwrite="true">
                <filterchain>
                    <striplinecomments>
                        <comment value="//@" />
                    </striplinecomments>
                    <concatfilter prepend="${basedir}/header.txt">
                    </concatfilter>
                </filterchain>
            </move>
        </target>
    </project>

    As for the generation of the include files in each directory, I'm not sure where to start at all. I'd appreciate it if somebody could point me in the right direction.
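
    If the only objection to <move> is that the file ends up elsewhere, one sketch (reusing the ${file} property from the foreach target above) is to filter into a temporary file and immediately move it back over the original:

    <target name="strip">
        <!-- Sketch: strip old //@ comments and prepend the header without
             permanently relocating the file; ${file} comes from the foreach. -->
        <move file="${file}" tofile="${file}.tmp" overwrite="true">
            <filterchain>
                <striplinecomments>
                    <comment value="//@"/>
                </striplinecomments>
                <concatfilter prepend="${basedir}/header.txt"/>
            </filterchain>
        </move>
        <move file="${file}.tmp" tofile="${file}" overwrite="true"/>
    </target>

    For the per-directory includes.js generation, one avenue worth exploring is a second foreach over a dirset instead of a fileset, writing one //@include line per .js file with <echo file="..." append="true">.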

    Read the article

  • Running a graph returns E_FAIL

    - by Manish
    Hi, I have been struggling for a while now to get my filter graph to run. I am trying to crop a .wmv file into smaller-duration .wmv files. It looks like quite a simple task; I don't know why it is getting so complicated. I follow this chain: Source -> SampleGrabber -> WM ASF writer. Here is my code:

    IBaseFilter* pASFWriter;
    ICaptureGraphBuilder2 *pBuilder = NULL;
    CoCreateInstance(CLSID_CaptureGraphBuilder2, NULL, CLSCTX_INPROC_SERVER, IID_ICaptureGraphBuilder2, (LPVOID*)&pBuilder);
    pBuilder->SetFiltergraph(pGraphBuilder);
    pBuilder->SetOutputFileName(&MEDIASUBTYPE_Asf, OUTFILE, &pASFWriter, NULL);

    IConfigAsfWriter *pConfig = NULL;
    HRESULT hr80 = pASFWriter->QueryInterface(IID_IConfigAsfWriter, (void**)&pConfig);
    if (SUCCEEDED(hr80)) {
        // Configure the ASF Writer filter.
        pConfig->Release();
    }

    IBaseFilter *pSource = NULL;
    pGraphBuilder->AddSourceFilter(FILENAME, L"Source", &pSource);

    IBaseFilter *pGrabberF2 = NULL;
    ISampleGrabber *pGrabber2 = NULL;
    CoCreateInstance(CLSID_SampleGrabber, NULL, CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&pGrabberF2));
    pGraphBuilder->AddFilter(pGrabberF2, L"Sample Grabber2");

    AM_MEDIA_TYPE mt1;
    ZeroMemory(&mt1, sizeof(mt1));
    mt1.majortype = MEDIATYPE_Video;
    mt1.subtype = MEDIASUBTYPE_RGB24;
    pGrabberF2->QueryInterface(IID_ISampleGrabber, (void**)(&pGrabber2));
    pGrabber2->SetBufferSamples(TRUE);
    pGrabber2->SetOneShot(FALSE);
    pGrabber2->SetMediaType(&mt1);

    pSource->EnumPins(&pEnum2);
    pEnum2->Next(1, &pPin2, NULL);
    HRESULT hr108 = ConnectFilters(pGraphBuilder, pPin2, pGrabberF2); // Source to Grabber

    pGrabberF2->EnumPins(&pEnum3);
    IEnumPins *pEnum4 = NULL;
    pASFWriter->EnumPins(&pEnum4);
    IPin* pPin4 = NULL;
    while (S_OK == pEnum3->Next(1, &pPin3, NULL) && S_OK == pEnum4->Next(1, &pPin4, NULL)) {
        pGraphBuilder->Connect(pPin3, pPin4); // Grabber to FileWriter
    }

    pGraphBuilder->RenderFile(FILENAME, NULL); // FILENAME = INPUTFILENAME (.wmv format)
    pMediaPosition->put_CurrentPosition(start);
    pMediaPosition->put_StopTime(stop);
    HRESULT test1 = pMediaControl->Run();

    All of it runs fine (returns S_OK), but test1 returns E_FAIL and no file is created. Can somebody help?
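
    Two things stand out: none of the connection HRESULTs after hr80 are checked, and RenderFile() is called on a graph that already has a hand-built source -> grabber -> writer path, which can leave two competing render chains. A sketch of an alternative (hedged, since it assumes the same pBuilder, pSource, pGrabberF2 and pASFWriter variables from the code above) is to let the capture graph builder do the wiring and surface the first failing step:

    #include <dshow.h>

    // Sketch: evaluate each call once, return the failing HRESULT immediately
    #define CHECK(expr) { HRESULT hrc = (expr); if (FAILED(hrc)) return hrc; }

    HRESULT BuildCropGraph(ICaptureGraphBuilder2 *pBuilder,
                           IBaseFilter *pSource,
                           IBaseFilter *pGrabberF2,
                           IBaseFilter *pASFWriter)
    {
        // Video path: source -> sample grabber -> ASF writer
        CHECK(pBuilder->RenderStream(NULL, &MEDIATYPE_Video,
                                     pSource, pGrabberF2, pASFWriter));
        // Audio path straight to the writer (only if the .wmv has audio)
        CHECK(pBuilder->RenderStream(NULL, &MEDIATYPE_Audio,
                                     pSource, NULL, pASFWriter));
        return S_OK;   // then seek with IMediaPosition and Run() as before
    }

    With every HRESULT checked, the step that actually produces E_FAIL becomes visible instead of only surfacing later at Run().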

    Read the article

< Previous Page | 14 15 16 17 18 19 20 21 22 23 24  | Next Page >