Search Results

Search found 491 results on 20 pages for 'staging'.

Page 13/20 | < Previous Page | 9 10 11 12 13 14 15 16 17 18 19 20  | Next Page >

  • CLSF & CLK 2013 Trip Report by Jeff Liu

    - by jamesmorris
    This is a contributed post from Jeff Liu, lead XFS developer for the Oracle mainline Linux kernel team. Recently, I attended both the China Linux Storage and Filesystem workshop (CLSF) and the China Linux Kernel conference (CLK), which were held in Shanghai. Here are the highlights of both events.

    CLSF - 17th October

    XFS update (led by Jeff Liu). XFS keeps up rapid progress with a lot of changes, focused especially on infrastructure/performance improvements as well as new feature development. This is reflected in a sample diffstat comparison of XFS, Ext4+JBD2 and Btrfs, running `git diff --stat --minimal -C -M v3.7..v3.12-rc4` against fs/xfs, fs/ext4 plus fs/jbd2, and fs/btrfs respectively (the individual commands are sketched at the end of this section): XFS: 141 files changed, 27598 insertions(+), 19113 deletions(-). Ext4+JBD2: 39 files changed, 10487 insertions(+), 5454 deletions(-). Btrfs: 70 files changed, 19875 insertions(+), 8130 deletions(-). What made up those changes in XFS?
    - Self-describing metadata (CRC32c). This new feature accounted for about 70% of the code changes; it can be enabled via `mkfs.xfs -m crc=1 /dev/xxx` and requires the v5 superblock.
    - Transaction log space reservation improvements. With this change, the log space reservation is calculated at mount time rather than at run time, reducing CPU overhead.
    - User namespace support, so both XFS and USERNS can be enabled in the kernel configuration beginning with Linux 3.10. Thanks to Dwight Engen for his efforts on this.
    - Split project/group quota inodes. Originally, project quota could not be enabled together with group quota because they shared the same quota file inode; now it works, but only for the v5 (CRC-enabled) superblock.
    - CONFIG_XFS_WARN, a new lightweight runtime debugging option that can be deployed in production environments.
    - Readahead of log objects during recovery, which speeds up log replay significantly.
    - Speculative preallocation inode tracking, clearing and throttling. The main purpose is to deal with inodes that have post-EOF space due to speculative preallocation, and to support improved quota management that frees up a significant amount of unwritten space when at or near EDQUOT. It supports background scanning, which occurs on a longish interval (5 minutes by default, tunable), and on-demand scanning/trimming via ioctl(2).
    Heated arguments ensued from this session, especially around the comparison between Ext4 and Btrfs in different areas; I had to spend the whole morning of the first day answering those questions. We basically agreed that XFS is the best choice on Linux nowadays because:
    - Stability: XFS has a good stability record over the past 10 years. Fengguang Wu, who leads the 0-day kernel test project, also said that he has observed fewer errors than in other filesystems over the past year or more. I owe that to the XFS upstream reviewers, who always perform serious code review as well as testing.
    - Good performance for large and small files: the claim that XFS does not work well for small files has been an old story for years.
    - Best choice (maybe) for distributed PB-scale filesystems: e.g. Ceph recommends deploying OSD daemons on XFS because Ext4 has a limited xattr size.
    - Best choice for large storage (>16TB): Ext4 does not support a single file larger than around 15.95TB.
    - Scalability: any objection that XFS is best on this point? :)
    - XFS handles transaction concurrency better than Ext4. Why? The maximum log size in XFS is 2038MB, compared to 128MB in Ext4.
    - Misc: Ext4 is widely used and has proved fast and stable under various loads and scenarios, XFS just needs more users, and Btrfs is still maturing.
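    As a quick reference, the statistics quoted above come down to one diffstat per source tree, and the CRC feature is selected at mkfs time; a sketch of the commands (taken from the text above, run inside a kernel git tree):

        # one diffstat per filesystem tree, v3.7..v3.12-rc4
        git diff --stat --minimal -C -M v3.7..v3.12-rc4 -- fs/xfs
        git diff --stat --minimal -C -M v3.7..v3.12-rc4 -- fs/ext4 fs/jbd2
        git diff --stat --minimal -C -M v3.7..v3.12-rc4 -- fs/btrfs

        # self-describing metadata (CRC32c) is enabled at mkfs time on a v5 superblock
        mkfs.xfs -m crc=1 /dev/xxx
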
    Ceph Introduction (led by Li Wang). This was a hot topic. Li gave us a nice introduction to the design as well as their current work. Actually, the Ceph client has been included in the Linux kernel since 2.6.34 and supported by OpenStack since Folsom, but it seems it has not yet been widely deployed in production environments. Their major work focuses on inline data support, to separate metadata and data storage; a file access needs two round trips - fetch the metadata from the MDS, then get the data from the OSD - and small-file access is limited by network latency. The solution is, for small files, to store the data with the metadata, so that when accessing a small file the metadata server can push both metadata and data to the client at the same time. In this way they can reduce the overhead of calculating the data offset and save the communication with the OSD. For this feature they have only run some small-scale testing, but have already seen noticeable improvements. Test environment: Intel 2 CPU 12 Core, 64GB RAM, Ubuntu 12.04, Ceph 0.56.6 with 200GB SATA disk, 15 OSD, 1 MDS, 1 MON. The sequential read performance for 1K-size files improved by about 50%. I asked Li and Zheng Yan (the core developer of Ceph, who also worked on Btrfs) whether Ceph is really stable and can be deployed in production for large-scale, PB-level storage, but they could not give a positive answer; it looks like Ceph is not even rolled out across Dreamhost (subject to confirmation). From Li, they have only deployed Ceph for small-scale storage (32 nodes), although they'd like to try 6000 nodes in the future.

    Improve Linux swap for flash storage (led by Shaohua Li). Because of its high density, low power and low price, flash storage (SSD) is a good candidate to partially replace DRAM. A quick answer for this is using SSD as swap. But Linux swap is designed for slow hard disk storage, so there are a lot of challenges to using SSD efficiently for swap.
    - Swap-out: swap_map scan. swap_map is the in-memory data structure that tracks swap disk usage, but it uses a slow linear scan. It becomes a bottleneck when finding many adjacent pages on an SSD. Shaohua changed it to a cluster (128K) list, resulting in an O(1) algorithm. However, this approach needs restrictive cluster alignment and is only enabled for SSDs.
    - IO pattern. In most cases the swap IO has an interleaved pattern because of multiple reclaimers, or because a free cluster is shared by all reclaimers. Even though the block layer can merge interleaved IO to some extent, we cannot count on it completely. Hence a per-CPU cluster was added on top of the previous change; it helps a reclaimer do sequential IO, and the block layer finds it easier to merge the IO.
    - TLB flush. If we're reclaiming an active page, we first move the page from the active LRU list to the inactive LRU list, and then reclaim it from the inactive LRU to swap it out. During the process we need to clear the PTE twice: first the 'A' (ACCESS) bit, then the 'P' (PRESENT) bit. Processors need to send lots of IPIs, which makes the TLB flush really expensive. Some work has been done to improve this, including reworking smp_call_function_many() or removing the first TLB flush on x86, but there are still some arguments here and only parts of the work have been pushed to mainline.
    - Swap-in. A page fault does iodepth=1 synchronous IO, but it's a little wasteful to issue only a page-sized IO. The obvious solution is swap readahead.
    But the current in-kernel swap readahead is arbitrary (always 8 pages), and it doesn't always perform well for both random and sequential access workloads. Shaohua introduced a new flag for madvise (MADV_WILLNEED) to do swap prefetch, so the change is in the userspace API and leaves the in-kernel readahead unchanged (though I think some improvement could also be done there).
    - Swap discard. As we know, discard is important for SSD write throughput, but the current swap discard implementation is synchronous. He changed it to asynchronous discard, which allows discard and write to run at the same time. Meanwhile, the unit of discard was also optimized to a cluster.
    - Misc: lock contention. With many concurrent swap-outs and swap-ins, contention on locks such as anon_vma or swap_lock is high, so he changed swap_lock to a per-swap-device lock. But there is still some lock contention on very fast SSDs because of the swapcache address_space lock.

    Zproject (led by Bob Liu). Bob gave us a very nice introduction to the current memory compression status. There are now three projects (zswap/zram/zcache) which all aim to smooth out swap IO storms and improve performance, but each has its own pros and cons.
    - ZSWAP. It is implemented on top of the frontswap API and uses a dynamic allocator named zbud to allocate free pages. Zbud means pairs of zpages are "buddied": it can store at most two compressed pages in one page frame, so the maximum compression ratio is 50%. Each page frame is LRU-linked and can be shrunk under memory pressure. If the compressed memory pool reaches its limit, shrinking or reclaim happens: a page frame is decompressed into two newly allocated pages, which are then written to the real swap device, but this can fail when allocating the two pages.
    - ZRAM. Acts as a compressed ramdisk used as a swap device, and it uses zsmalloc as its allocator, which has high density but may have fragmentation issues. Besides, page reclaim is hard, since it needs more pages to uncompress and free just one page. ZRAM is preferred by embedded systems, which may not have any real swap device. Both ZRAM and ZSWAP are currently in the drivers/staging tree, and in the mm community there have been some discussions about merging ZRAM into ZSWAP or vice versa, but no agreement yet.
    - ZCACHE. Handles file page compression, but it was removed from staging recently.

    From industry (led by Tang Jie, LSI). An LSI engineer introduced several new products to us. The first was RAID 5/6 cards that use full-stripe writes to improve performance. The second was the SandForce flash controller, which can understand data file types (data entropy) to reduce write amplification (WA) for nearly all writes; it's called DuraWrite, and a typical WA is 0.5. What's more, if its Dynamic Logical Capacity function module is enabled, the controller can do data compression that is transparent to the upper layers. LSI testing shows that with this virtual capacity enabled, a 1x TB drive can support up to 2x TB of capacity, but the application must monitor free flash space to maintain optimal performance and to guard against free flash space exhaustion. He said the most useful application is databases. Another thing I think is worth mentioning is an NV-DRAM memory in NMR/Raptor which is directly exposed to the host system. Applications can access the NV-DRAM directly via a memory address, using the standard system call mmap(). He said that it is very useful for database logging now.
    This kind of NVM product has begun to appear in recent years, and it is said that Samsung is building a research center in China for related products. IMHO, NVM will affect the current OS layers, especially the file system; e.g. journaling may need to be redesigned to fully utilize such nonvolatile memory.

    OCFS2 (led by Canquan Shen). Without a doubt, Huawei is the biggest contributor to OCFS2 in the past two years. They have posted 46 upstream patches, and 39 patches have been merged. Their current project is based on a 32/64 node cluster, but they have also tried 128 nodes at the experimental stage. The major work they are doing is to support ATS (atomic test and set), which can work with DLM at the same time. It looks like this idea was inspired by the VMware VMFS locking, i.e. http://blogs.vmware.com/vsphere/2012/05/vmfs-locking-uncovered.html

    CLK - 18th October 2013

    Improving Linux Development with Better Tools (Andi Kleen). This talk focused on how to find and solve bugs as Linux's complexity grows. Generally, we can do this with the following kinds of tools:
    - Static code checkers, e.g. sparse, smatch, coccinelle, clang checker, checkpatch, gcc -W/LTO, stanse. These can check a lot of things, from simple mistakes to complex problems, but the challenges are: some are very slow, false positives, and it may need a concentrated effort to get the false positives down. In particular, no static checker I found can follow indirect calls ("OO in C", common in the kernel): struct foo_ops { int (*do_foo)(struct foo *obj); }; foo->do_foo(foo);
    - Dynamic runtime checkers, e.g. thread checkers, kmemcheck, lockdep. Ideally all kernel code would come with a test suite; then someone could run all the dynamic checkers.
    - Fuzzers/test suites. e.g. Trinity is a great tool; it finds many bugs, but needs a manual model for each syscall. Modern fuzzers use automatic feedback, but not for the kernel yet: http://taviso.decsystem.org/making_software_dumber.pdf
    - Debuggers/tracers to understand code, e.g. ftrace; it can dump on events/oops/custom triggers, but in many cases there is still too much overhead to run it all the time during debugging.
    - Tools to read/understand source, e.g. grep/cscope work great for many cases, but do not understand indirect pointers (the OO-in-C model used in the kernel). "Give us all do_foo instances": struct foo_ops { int (*do_foo)(struct foo *obj); } = { .do_foo = my_foo }; foo->do_foo(foo); It would be great to have a cscope-like tool that understands this based on types/initializers.

    XFS: The High Performance Enterprise File System (Jeff Liu) [slides]. I gave a talk introducing the disk layout and unique features, as well as the recent changes. The slides include some charts comparing the performance of XFS/Btrfs/Ext4 for small files. About a dozen users raised their hands when I asked who has experience with XFS. I remember that when I asked the same question at LinuxCon Japan, only 3 people raised their hands, and they were Chris Mason, Ric Wheeler, and one other attendee. The attendee questions were mainly focused on stability and comparison with other file systems.

    Linux Containers (Feng Gao). The speaker introduced the purpose of the various kinds of namespaces, including mount/UTS/IPC/Network/Pid/User, as well as the system API/ABI. For the userspace tools, he mainly focused on Libvirt LXC rather than LXC itself.
    Libvirt LXC is another userspace container management tool, implemented as one type of libvirt driver; it can manage containers, create namespaces, create a private filesystem layout for a container, create devices for a container, and set up resource controllers via cgroups. In this talk, Feng also mentioned two other possible new namespaces for the future: the first is audit, though it is not clear whether it should be assigned to the user namespace or not; the other is syslog, but the question is, do we really need it?

    In-memory Compression (Bob Liu). Same as at CLSF, a nice introduction that I have already mentioned above.

    Misc. There were some other talks related to ACPI-based memory hotplug, smart wake-affinity in the scheduler, etc., but my head is not big enough to record all those things.

    -- Jeff Liu

    Read the article

  • Oracle Data Integrator 11.1.1.5 Complex Files as Sources and Targets

    - by Alex Kotopoulis
    Overview: ODI 11.1.1.5 adds the new Complex File technology for use with file sources and targets. The goal is to read or write file structures that are too complex to be parsed using the existing ODI File technology. This includes: different record types in one list that use different parsing rules; hierarchical lists, for example customers with nested orders; parsing instructions in the file data, such as delimiter types, field lengths, and type identifiers; complex headers, such as multiple header lines or parseable information in the header; skipping of lines; and conditional or choice fields. Similar to the ODI File and XML File technologies, the complex file parsing is done through a JDBC driver that exposes the flat file as relational table structures. Complex files are mapped to one or more table structures, as opposed to the (simple) File technology, which always has a one-to-one relationship between file and table. The resulting set of tables follows the same concept as the ODI XML driver: table rows have additional PK-FK relationships to express hierarchy, as well as order values to maintain the file order in the resulting table. The parsing instruction format used for complex files is the nXSD (native XSD) format that is already in use with Oracle BPEL. This format extends the XML Schema standard by adding additional parsing instructions to each element. Using nXSD parsing technology, the native file is converted into an internal XML format. It is important to understand that the XML is streamed to improve performance; there is no size limitation on the native file based on memory size, because the XML data is never fully materialized. The internal XML is then converted to a relational schema using the same mapping rules as the ODI XML driver.

    How to Create an nXSD File: Complex file models depend on the nXSD schema for the given file. This nXSD file has to be created using a text editor or the Native Format Builder Wizard that is part of Oracle BPEL. BPEL is included in the ODI Suite, but not in standalone ODI Enterprise Edition. The nXSD format extends the standard XSD format through nxsd attributes. nXSD is a valid XML Schema, since the XSD standard allows extra attributes with their own namespaces.
    The following is a sample nXSD schema:

        <?xml version="1.0"?>
        <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                    xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd"
                    xmlns:tns="http://xmlns.oracle.com/pcbpel/demoSchema/csv"
                    targetNamespace="http://xmlns.oracle.com/pcbpel/demoSchema/csv"
                    elementFormDefault="qualified" attributeFormDefault="unqualified"
                    nxsd:encoding="US-ASCII" nxsd:stream="chars" nxsd:version="NXSD">
          <xsd:element name="Root">
            <xsd:complexType>
              <xsd:sequence>
                <xsd:element name="Header">
                  <xsd:complexType>
                    <xsd:sequence>
                      <xsd:element name="Branch" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
                      <xsd:element name="ListDate" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>
                    </xsd:sequence>
                  </xsd:complexType>
                </xsd:element>
                <xsd:element name="Customer" maxOccurs="unbounded">
                  <xsd:complexType>
                    <xsd:sequence>
                      <xsd:element name="Name" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
                      <xsd:element name="Street" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
                      <xsd:element name="City" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>
                    </xsd:sequence>
                  </xsd:complexType>
                </xsd:element>
              </xsd:sequence>
            </xsd:complexType>
          </xsd:element>
        </xsd:schema>

    The nXSD schema annotates elements to describe their position and delimiters within the flat text file. The schema above uses almost exclusively the nxsd:terminatedBy instruction to look for the next terminator characters. There are various constructs in nXSD to parse fixed-length fields, look ahead in the document for string occurrences, perform conditional logic, use variables to remember state, and many more. nXSD files can either be written manually using an XML Schema editor or created using the Native Format Builder Wizard. Both the Native Format Builder Wizard and the nXSD language are described in the Application Server Adapter Users Guide. The way to start the Native Format Builder in BPEL is to create a new File Adapter; in step 8 of the Adapter Configuration Wizard a new Schema for Native Format can be created. The Native Format Builder guides you through a number of steps to generate the nXSD based on a sample native file. If the format is complex, it is often a good idea to "approximate" it with a similar simple format and then add the complex components manually. The resulting *.xsd file can be copied and used as the format for ODI; other BPEL constructs such as the file adapter definition are not relevant for ODI. Using this technique it is also possible to parse the same file format in SOA Suite and ODI, for example using SOA for small real-time messages and ODI for large batches. The nXSD schema in this example describes a file with a header row containing data, and 3 string fields per row delimited by commas, for example:

        Redwood City Downtown Branch, 06/01/2011
        Ebeneezer Scrooge, Sandy Lane, Atherton
        Tiny Tim, Winton Terrace, Menlo Park

    The ODI Complex File JDBC driver exposes the file structure through a set of relational tables with PK-FK relationships.
    The tables for this example are:

        Table ROOT (1 row):
          ROOTPK         Primary key for the root element
          SNPSFILENAME   Name of the file
          SNPSFILEPATH   Path of the file
          SNPSLOADDATE   Date of load

        Table HEADER (1 row):
          ROOTFK         Foreign key to the ROOT record
          ROWORDER       Order of the row in the native document
          BRANCH         Data
          BRANCHORDER    Order of Branch within the row
          LISTDATE       Data
          LISTDATEORDER  Order of ListDate within the row

        Table ADDRESS (2 rows):
          ROOTFK         Foreign key to the ROOT record
          ROWORDER       Order of the row in the native document
          NAME           Data
          NAMEORDER      Order of Name within the row
          STREET         Data
          STREETORDER    Order of Street within the row
          CITY           Data
          CITYORDER      Order of City within the row

    Every table has PK and/or FK fields to reflect the document hierarchy through relationships. In this example this is trivial, since the HEADER and all CUSTOMER records point back to the PK of ROOT; deeper-nested documents require this to identify parent elements. All tables also have a ROWORDER field to define the order of rows, as well as order fields for each column, in case the order of columns varies in the original document and needs to be maintained. If order is not relevant, these fields can be ignored.

    How to Create a Complex File Data Server in ODI: After creating the nXSD file and a test data file, and storing them on a local file system accessible to ODI, you can go to the ODI Topology Navigator to create a Data Server and Physical Schema under the Complex File technology. This technology follows the conventions of other ODI technologies and is very similar to the XML technology. The parsing settings, such as the source native file, the nXSD schema file, the root element, as well as the external database, can be set in the JDBC URL. The use of an external database defined by dbprops is optional, but it is strongly recommended for production use; ideally, the staging database should be used for this. Also, when using a complex file exclusively for read purposes, it is recommended to use the ro=true property to ensure the file is not unnecessarily synchronized back from the database when the connection is closed. A data file is always required to be present at the filename path during design time; without this file, operations like testing the connection, reading the model data, or reverse engineering the model will fail. All properties of the Complex File JDBC driver are documented in the Oracle Fusion Middleware Connectivity and Knowledge Modules Guide for Oracle Data Integrator, in Appendix C: Oracle Data Integrator Driver for Complex Files Reference. David Allan has created a great viewlet, Complex File Processing - 0 to 60, which shows the creation of a Complex File data server as well as a model based on this server.

    How to Create Models Based on a Complex File Schema: Once the physical schema and logical schema have been created, the Complex File can be used to create a Model as if it were based on a database. When reverse-engineering the Model, datastores (tables) are created for each XSD element of complex type. Use of complex files as sources is straightforward; when using them as targets, it has to be ensured that all dependent tables have matching PK-FK pairs; the same applies to the XML driver as well.

    Debugging and Error Handling: There are different ways to test an nXSD file. The Native Format Builder Wizard can be used even if the nXSD wasn't created in it; it will show issues related to the schema and/or test data. In ODI, the nXSD will be parsed and run against the existing test file when testing a connection in the data server.
    If either the nXSD has an error or the data does not comply with the schema, an error will be displayed. Sample error message:

        Error while reading native data. [Line=1, Col=5] Not enough data available in the input, when trying to read data of length "19" for "element with name D1" from the specified position, using "style" as "fixedLength" and "length" as "". Ensure that there is enough data from the specified position in the input.

    Complex File FAQ

    Is the size of the native file limited by available memory? No; since the native data is streamed through the driver, only the available space in the staging database limits the size of the data. There are limits on individual field sizes, though; a single large object field needs to fit in memory.

    Should I always use the complex file driver instead of the file driver in ODI now? No; use the File technology for all simple file parsing tasks, for example any fixed-length or delimited files that have just one row format and can be mapped into a simple table. Because of its narrow assumptions, the ODI file driver is easy to configure within ODI and can stream file data without writing it into a database. The complex file driver should be used whenever the use case cannot be handled by the file driver.

    Are we generating XML out of flat files before we write it into a database? We don't materialize any XML as part of parsing a flat file, either in memory or on disk. The data produced by the XML parser is streamed as Java objects that just use the XSD-derived nXSD schema as their type system. We use the nXSD schema because it is the standard for describing complex flat file metadata in Oracle Fusion Middleware, and it enables users to share schemas across products.

    Is the nXSD file interchangeable with SOA Suite? Yes, ODI can use the same nXSD files as SOA Suite, allowing mixed use cases with the same data format.

    Can I start the Native Format Builder from ODI Studio? No, the Native Format Builder has to be started from a JDeveloper instance with BPEL. You can get BPEL as part of the SOA Suite bundle. Users without SOA Suite can manually develop nXSD files using XSD editors.

    When is the database data written back to the native file? Data is synchronized using the SYNCHRONIZE and CREATE FILE commands, and when the JDBC connection is closed. It is recommended to set the ro or read_only property to true when a file is used exclusively for reading, so that no unnecessary write-backs occur.

    Is the nXSD metadata part of the ODI Master or Work Repository? No, the data server definition in the master repository only contains the JDBC URL with file paths; the nXSD files have to be accessible on the file systems where the JDBC driver is executed during production, either by copying them or by using a network file system.

    Where can I find sample nXSD files? The Application Server Adapter Users Guide contains nXSD samples for various use cases.

    Read the article

  • No such file to load bundler error for Rails 3

    - by kgpdeveloper
    I have a Rails 3 app ready for staging. I haven't got a VPS host set up yet, as I was planning to have everything on a shared host for the first few months. Problem:

        cd myapp
        bundle check
        # result: The Gemfile's dependencies are satisfied

    Passenger error:

        Error message: no such file to load -- bundler
        Exception class: LoadError

    The frustrating thing about shared hosts is that I have to add these lines to config.ru:

        ENV['GEM_HOME'] = '/home/username/.gems'
        ENV['GEM_PATH'] = '$GEM_HOME:/usr/lib/ruby/gems/1.8'

    Still no luck; the same "no such file to load -- bundler" error appears. Has anybody got this working? Rails 3, Debian, shared host (DreamHost)? I could just go ahead and register on Slicehost/Fivebean, but before I do, I'd like to know why that error is showing up. Thanks.
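    For what it's worth, a minimal sketch of the shared-host setup being described, using the ~/.gems path from the question; vendoring the bundle with the app is one common workaround on hosts like this, not a confirmed fix for this particular error:

        # run on the shared host, from the application root (a sketch, not a verified fix)
        export GEM_HOME=$HOME/.gems
        export GEM_PATH=$GEM_HOME:/usr/lib/ruby/gems/1.8
        gem install bundler                      # bundler itself must live where Passenger can load it
        bundle install --path vendor/bundle      # vendor the gems alongside the app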

    Read the article

  • ViewContext.RouteData.Values["action"] is null on server... works fine on local machine

    - by rksprst
    I'm having a weird issue where ViewContext.RouteData.Values["action"] is null on my staging server, but works fine on my dev machine (ASP.NET Development Server). The code is simple:

        public string CheckActiveClass(string actionName)
        {
            string text = "";
            if (ViewContext.RouteData.Values["action"].ToString() == actionName)
            {
                text = "selected";
            }
            return text;
        }

    I get the error on the ViewContext.RouteData.Values["action"] line. The error is: Exception Details: System.NullReferenceException: Object reference not set to an instance of an object. Any help is appreciated. Thanks in advance.

    Read the article

  • Netbeans - *Easy* way to tell which Netbeans project a file belongs to

    - by sdek
    I was wondering if there is a way in NetBeans (v6.8, or whatever) to have the editor tabs colored based on which project the file belongs to, or some other easy way to distinguish which project a file belongs to. Basically my problem is that I have 3-4 NetBeans projects which all have a very similar code base (production version, staging version, development version 1, development version 2), and sometimes I have files open from a few different projects and get confused as to which project the file belongs to. I know that you can hover over the tab to see which project it belongs to, but it would be fantastic if there were another, easier way to visually distinguish which project a file belongs to.

    Read the article

  • Maven Plugin - Restart Jetty with new WAR?

    - by Walter White
    Hi all, What I would like to do is automatically test against several different Maven build profiles. I want to write a Maven plugin that iterates through each profile so I don't have to list them manually for the CI process. I just want to verify that the code works in development, testing, staging, and production once deployed there, and I want it to test against those profiles automatically so I can keep it as part of the same Maven build. How would I best set that up, and how would I log the results in Sonar or another tool? Walter
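    Before writing a custom plugin, one low-tech way to get this effect is to let the CI job loop over the profiles in a shell step; a rough sketch, assuming profile ids named after the environments mentioned above:

        # hypothetical profile ids; run the full build/test cycle once per profile
        for profile in development testing staging production; do
            mvn clean verify -P "$profile" || exit 1
        done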

    Read the article

  • HttpWebRequest to different IP than the domain resolves to

    - by fyjham
    Hey, Long story short: the different environments (dev/staging/uat/live) of an API I'm calling are set up by putting a host record on the server, so that the live domain resolves to their other server for the HTTP request. The problem is that they've done this with so many different environments that we don't have enough servers to use the server-wide hosts files for it anymore (we've got some environments running off the same servers - luckily not dev and live, though :P). I'm wondering if there's a way to make WebRequest request a domain but explicitly specify the IP of the server it should connect to? Or is there any way of doing this short of going all the way down to socket connections (which I'd really prefer not to do - wasting time and creating bugs by trying to re-implement the HTTP protocol)? PS: I've tried, and we can't just get a new sub-domain for each environment.

    Read the article

  • Push DVCS repository to master without needing codebase

    - by Scorchin
    To work on a client's staging environment I have to connect through a VPN which blocks all normal network traffic and prevents any connection to the Internet. This would immediately prevent any of the "normal" VCS solutions from being used, as it's not possible to gain access to the server. A solution to this would be to create a DVCS repository (git?) locally and then push changes to the master as and when needed. There is one flaw in this plan: the entire codebase is around 14GB. Downloading all of this over the internet would take some time, especially as I'm likely to be working on 3 or 4 different machines in each case. This seems silly and overkill for a DVCS. TL;DR: Can any DVCS solution allow you to push to a master server/repo without needing the codebase? Bad example: copy the .git folder (not the 14GB codebase) to another directory and push this to the master once disconnected from the VPN.
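    For what it's worth, git only transfers objects the receiving repository is missing, so the 14GB hit is a one-time seeding problem rather than a per-push cost; a rough sketch using git bundle to seed the other machines from disk (file and remote names hypothetical):

        # one-off: carry the full history to another machine on disk instead of over the wire
        git bundle create everything.bundle --all
        # on the other machine:
        git clone -b master everything.bundle workdir
        # day to day: a push only sends commits the remote does not already have
        git push origin master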

    Read the article

  • Proper way to bind WCF webservice to both HTTP (for dev) and HTTPS (for production)

    - by Nicholas H
    Seems like this would be fairly straightforward, but I can't figure it out. In lieu of using an ASMX web service, I'm trying to go with WCF, and finding it hard to figure out bindings. I would like to be able to use HTTP to connect to the WCF service for local development (and on our "staging" server), but require HTTPS on our production server. Should this be possible with two bindings? I cannot get it to work. If someone could provide an example of just a very basic HTTP and HTTPS WCF setup, I'd be eternally grateful. Or point me to a book/website/etc. which solves all the mysteries of WCF - that'd be great. Because right now it's looking easier to just go back to ASMX.

    Read the article

  • Reference platform specific System.Data.SQLite

    - by Dmitriy Nagirnyak
    Hi, I am using SQLite for unit testing and might use it as a database for local development/staging. System.Data.SQLite basically has two versions, x86 and x64, and the correct one has to be used for the specific platform. I have 64-bit Win7; other guys on the team might use 32-bit OSs. The server's platform is not known at this stage. If I use the 32-bit version of the assembly on a 64-bit platform I get BadImageFormatException: Could not load file or assembly 'System.Data.SQLite'. I believe something similar will happen when trying to use the 64-bit assembly on a 32-bit platform. So my question is: what is the best way to reference the SQLite assembly so that it does not depend on the platform and people can just use it? It is OK to use the 32-bit version of the assembly on a 64-bit platform (maybe there is a switch for that somewhere?). Thanks, Dmitriy.

    Read the article

  • date_create_from_format equivalent for PHP 5.2 (or lower)

    - by leekelleher
    Hi all, I'm working with PHP 5.3 on my local machine and needed to parse a UK date format (dd/mm/yyyy). I found that strtotime didn't work with that date format, so I used date_create_from_format instead, which works great. Now, my problem is that my staging server is running PHP 5.2, and date_create_from_format doesn't work on that version. (It's a shared server, and I wouldn't have a clue how to upgrade it to PHP 5.3.) So is there a similar function to date_create_from_format that I can use? Bespoke or PHP native? Many thanks, - Lee

    Read the article

  • Trouble with Powershell and running a complex commandline

    - by Frank Rosario
    Hi, I've been trying to run the following command line from a PowerShell build script we have, but keep running into issues:

        & 'C:\Dev\Yadda\trunk\BuildScripts\U tilities\csmanage.exe' /create-deployment /name:yadddayaddyaddadev /label:yadddayaddyaddadev /package:https://yadddayaddyadda.blob.core.windows.net/mydeployments/20100426_202848_FamilyMoments.cspkg /config:C:\Dev\WalmartOne\trunk\yadddayaddyadda.CloudService\bin\Debug\ServiceConfiguration.cscfg /slot:Staging /hosted-service:yadddayaddyadda-dev

    Note: the space in "Utilities" is intentional; I'm trying to sniff out a bug involving spaces in the executable path. I assure you, the path does exist with the space in it on my machine. What's the best way to call this command line from PowerShell? I've tried Invoke-Expression, Diagnostics.Process::Start, and &; each method comes up with some different type of error, usually that it couldn't find the executable. Any constructive input is greatly appreciated. Thanks.
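    A hedged sketch of one way to express that call in PowerShell: quote the executable path once, invoke it with the call operator, and pass each switch as its own argument so the space inside the path never reaches the argument parser (paths and names are the ones from the question; this is an illustration, not a verified fix):

        # call-operator form; an array passed to a native command becomes one argument per element
        $exe = 'C:\Dev\Yadda\trunk\BuildScripts\U tilities\csmanage.exe'
        $csArgs = @(
            '/create-deployment',
            '/name:yadddayaddyaddadev',
            '/label:yadddayaddyaddadev',
            '/package:https://yadddayaddyadda.blob.core.windows.net/mydeployments/20100426_202848_FamilyMoments.cspkg',
            '/config:C:\Dev\WalmartOne\trunk\yadddayaddyadda.CloudService\bin\Debug\ServiceConfiguration.cscfg',
            '/slot:Staging',
            '/hosted-service:yadddayaddyadda-dev'
        )
        & $exe $csArgs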

    Read the article

  • Page containing Microsoft Chart Control returns Service Unavailable

    - by MHinton
    I have a frustrating problem with an ASP.NET MVC view containing the Microsoft Chart control. When I request the view containing the control, I get the following error: Service Unavailable - HTTP Error 503. The service is unavailable. When I run the project under the Visual Studio 2008 dev server it works fine; when I deploy the project to the staging server I get the error. To make this even more frustrating, when I deploy to a different site on the same server under a virtual directory, it works. I also get no error messages in the event log or ELMAH when this happens. Has anyone else encountered this? What did you do to resolve it?

    Read the article

  • Increase file upload size limit in iis6

    - by JustFoo
    Is there any other place besides the metabase.xml file where the file upload size can be modified? I am currently running a staging server with IIS6, and it is set up to allow uploading of files up to 20MB. This works perfectly fine. I have a new production server where I am trying to set up this same size limit, so I edited the metabase.xml file and set it to 20971520. Then I restarted IIS and that didn't work, so I then restarted the entire server; that also didn't work. I can upload files around 2MB, so it is definitely allowing file sizes larger than the standard 200KB default size. I try uploading a 5MB file and my upload.aspx page completely crashes. Is it possible there is something else I need to configure? The production server is located on a server farm; could there be some limits set on their end? Thanks

    Read the article

  • Easily switching ConnectionStrings on publish to Azure

    - by David Pfeffer
    I'm currently building an Azure Web Role. I am testing this project against a local database server on localhost. Then, when confident that the project is working, I publish it to Staging on Windows Azure. However, I also have to remember to change the connection string to point to the live SQL server on SQL Azure before deploying, and then change it back to localhost afterwards. Is there any nice way to automate this, or perhaps a different process that avoids the issue altogether? For example, is there a way to have a configuration file for Azure that isn't updated with every deploy?

    Read the article

  • Modify installed SharePoint feature

    - by Laura L
    I have written a sequential workflow in SharePoint on our development environment. After testing, we decided to deploy this workflow as a feature on the staging environment. We did the following:
    1. copied the strongly named assembly to the GAC using gacutil
    2. copied feature.xml and workflow.xml to WebServerExtensions/12/templates/features/someFolder
    3. installed the feature (stsadm command)
    4. activated the feature (stsadm command)
    All worked exactly as planned and the workflow behaved correctly. The problem was, we decided to change something in the code (a message was not very self-explanatory), so on the development machine we updated the message as requested and rebuilt the project. The problem is, we cannot seem to find a way to correctly get rid of the previous version of this workflow/feature. To deploy the upgrade, we:
    1. deactivated and uninstalled the feature (stsadm commands), and also removed the assembly from the GAC
    2. increased the version of the assembly
    3. performed steps 1 to 4 from above
    When using the workflow we are still getting the first message; we cannot find a way to get the new message to be displayed. What are we missing?
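    For reference, a sketch of the retract/redeploy cycle described above, with hypothetical assembly and feature names. Two things commonly bite after a version bump: workflow.xml references the assembly by its full strong name (including Version=), so it has to be updated to the new version, and the old assembly tends to stay loaded until the worker process is recycled (iisreset or an app pool recycle).

        rem hypothetical names; run on the staging server after copying the new build
        gacutil /u MyCompany.ApprovalWorkflow
        gacutil /i MyCompany.ApprovalWorkflow.dll
        stsadm -o deactivatefeature -name ApprovalWorkflowFeature -url http://staging
        stsadm -o uninstallfeature -name ApprovalWorkflowFeature
        stsadm -o installfeature -name ApprovalWorkflowFeature
        stsadm -o activatefeature -name ApprovalWorkflowFeature -url http://staging
        iisreset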

    Read the article

  • How to apply coding methodologies and practices to non-coding work?

    - by Dan
    I can talk for hours about best practice, source control, change management, feature tracking, development cycles and the lot, but most of what I've learnt or read seems to apply to nuts-and-bolts programming of compiled applications - you know, ASCII files that get turned into 1s and 0s. How does one apply the same discipline and wisdom to working in environments that are point-and-click and config-centric? I'm thinking of CMSs and, specifically, my current 9 to 5: SharePoint. Traditional practices of source control and dev-staging-production seem to break down, since we're not working with code and the live environment changes with user input. So, to sum up a rather lengthy question: what works in a no-code environment?

    Read the article

  • How safe is JSONP implementation for login functionality

    - by MKS
    Hi guys, I am using JSONP for login authentication. Below is sample jQuery code:

        $.ajax({
            type: "GET",
            url: "https://staging/login/Login.aspx", // Send the login info to this page
            data: str,
            dataType: "jsonp",
            timeout: 200000,
            jsonp: "skywardDetails",
            success: function(result) {
                // Do something after the success
            }
        });

    In the above code, I have an HTTPS page for authentication: from my login dialog box, I send the username and password to my Login.aspx page, which calls a web service, taking the input sent by the login dialog page and returning the user's details as a JSONP object. My question is: how safe is the above implementation, and can you also suggest how I can improve its security? Thanks!

    Read the article

  • automatic push to CDN deployment strategy

    - by imanc
    Does anyone have ideas for a strategy to push content to a CDN upon deployment? The key issue I'm facing is that we have a site that is available in various contexts: local development, development server, staging, then finally live. The live version of the site needs to load assets from a domain which will be pointed at a CDN: assets.domain.com. However, we will have numerous references to the assets pointing to a relative folder, e.g. /images/, in CSS, possibly in JS, and in the HTML source. Our new site will use Capistrano for deployment, and it may be that we can hook in another build tool (Apache Ant?) or some custom script to search/replace paths. I am wondering if anyone has had to deal with this issue before, and what solutions you put in place to automate managing the CDN, both in terms of pushing content up to the CDN and managing the HTML and CSS references to assets on the CDN. Thanks, Imanc
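    As a sketch of the kind of post-deploy hook being considered (hypothetical paths and CDN origin host; only assets.domain.com comes from the question): rewrite the relative /images/ references in the built CSS for the live environment, then push the asset folders up to the CDN origin:

        # rough post-deploy step, live environment only
        find public -name '*.css' -exec \
            sed -i 's#url(/images/#url(https://assets.domain.com/images/#g' {} +
        rsync -az public/images public/css public/js deploy@cdn-origin:/var/www/assets/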

    Read the article

  • JQuery Tooltip and Scrollable Fix

    - by KrippledShark
    I have set up the navigation on my site using a scrollable div created with Flowplayer's jQuery Tools. I also want to add a tooltip to the individual images within the scrollable div. The problem is that the tooltip needs to appear outside of the div. I can adjust the height of the div so you can see the tooltip, but the tooltip is still constrained inside another div. Here is the link to my code: http://staging.asla.org/sustainablesites/TestAGAIN.html Is there a way to make this tooltip (div) appear outside of its containing divs?

    Read the article

  • Is .gitignore not working or I have misunderstood it?

    - by Shubham
    I am very new to git. I have a .gitignore in my working folder:

        *.jpg
        *.gif
        *.png
        system/*
        */Zend/*
        .idea/*.*

    Well, I did git init and then git add *. At that point it worked fine and ignored the above files. But when I made some changes and ran the same command, it put the ignored files into the staging area. The reason I am using git add * is that I work on many files, and adding each file would be overkill. Update: here are the messages when I run git add * a second time (the list is too long):

        #new file: application/vendors/Zend/XmlRpc/Value/String.php
        #new file: application/vendors/Zend/XmlRpc/Value/Struct.php
        ...
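    Two things worth checking here, sketched below. First, in .gitignore a single * never matches across a /, so */Zend/* does not cover paths as deep as application/vendors/Zend/... (a bare Zend/ line, or application/vendors/Zend/, would match). Second, .gitignore never un-tracks files that are already in the index, so anything added before the patterns were right has to be removed from the index once:

        # if the Zend files were already added or committed, drop them from the index once
        # (they stay on disk); after that the corrected ignore pattern takes over
        git rm -r --cached application/vendors/Zend
        git commit -m "Stop tracking vendored Zend files"
        # '.' also picks up dotfiles and lets git, not the shell, apply the ignore rules
        git add .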

    Read the article

  • Is git svn rebase required before git svn dcommit?

    - by allyourcode
    I'm reading about using git as an svn client here: http://learn.github.com/p/git-svn.html That page suggests that you do git svn rebase before git svn dcommit, which makes perfect sense; it's like doing svn update before doing svn commit. Then, I started looking at the documentation for git svn dcommit (I was wondering what the 'd' is about): http://www.kernel.org/pub/software/scm/git/docs/git-svn.html You have to scroll down a bit to see the documentation on dcommit, which says this: Commit each diff from a specified head directly to the SVN repository, and then rebase or reset (depending on whether or not there is a diff between SVN and head). This confuses me, because if you do as the first page says, there will be no changes to pull down from svn once the first part of dcommit finishes. I'm also confused by the part that talks about reset; isn't git reset for removing changes from the staging area? Why would rebase or reset follow (the first part of) a dcommit?
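    For context, a sketch of the round trip both pages describe; the "rebase or reset" at the end of dcommit happens because dcommit rewrites each local commit as a new commit carrying git-svn-id metadata, so the local branch has to be moved onto those rewritten commits, not because new SVN changes are expected:

        git add -A
        git commit -m "local work"   # dcommit sends commits, not the index
        git svn rebase               # replay local commits on top of the latest SVN revisions
        git svn dcommit              # push each commit to SVN, then rebase/reset the local branch
                                     # onto the rewritten commits that carry git-svn-id metadata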

    Read the article

  • TFS no release folder in build folder

    - by brian b
    I have a TFS build that works fine on the client, but when executed on the server, no actual binaries get created. When I go to the build folder \\[MyServer]\builds\[BuildName], I see:

        BuildLog.txt
        ErrorsWarningsLog.txt
        Release.txt

    I expect to see a big \Release folder full of my DLLs, but I get nothing. The error log reports no problems up until we ask the build to copy the binaries to our staging server; if I comment those steps out, I get no errors. CustomizableOutDir is true, DropLocation is set to something sensible, and BuildDirectoryPath is set to something sensible. But no matter what, I just don't get any DLLs built. Our local TFS guy is baffled too. Any suggestions?

    Read the article

  • Data Warehouse: One Database or many?

    - by drrollins
    At my new company, they keep all data associated with the data warehouse, including import, staging, audit, dimension and fact tables, together in the same physical database. I've been a database developer for a number of years now, and this consolidation of function and form seems counter to everything I know. It seems to make security, backup/restore and performance management more manually intensive. Is this something that is done in the industry? Are there substantial reasons for doing or not doing it? The platform is Netezza; the size is in terabytes, hundreds of millions of rows. What I'm looking to get from answers to this question is a solid understanding of how right or wrong this path is. From your experience, what are the issues I should focus on arguing if this is a path that will cause trouble for us down the road? If it is no big deal, then I'd like to know that as well.

    Read the article
