Search Results

Search found 169 results on 7 pages for 'dis'.

Page 6 of 7

  • Asterisk: Originate API - Which card to use to detect busy/ringing/answer event for FXO

    - by spkhaira
    I want to use the Originate API of Asterisk to place an outbound call on an FXO channel. For testing purposes I am using an X100P card and, as expected, the card is not able to detect whether the number is busy/ringing or when it is answered. I want to know which card I should use so that I can get such basic events ... I am not really interested in detailed call progress analysis for answering machines or live voice; I just need basic busy/ringing and answer events and maybe a disconnect event. Thanks.
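
    For reference, the Originate request itself is issued over the Asterisk Manager Interface (AMI); whichever card is used, the busy/ringing/answer outcome comes back as manager events. Below is a minimal Python sketch of an asynchronous Originate over a raw AMI socket. The host, credentials and channel name are placeholders, and exactly which events arrive (e.g. OriginateResponse, Hangup) depends on the card's call-progress detection.

        import socket

        HOST, PORT = "127.0.0.1", 5038        # AMI usually listens on 5038 (placeholder host)
        USERNAME, SECRET = "admin", "secret"  # hypothetical manager credentials

        def send_action(sock, **fields):
            """Serialize one AMI action: 'Key: value' lines terminated by a blank line."""
            payload = "".join(f"{k}: {v}\r\n" for k, v in fields.items()) + "\r\n"
            sock.sendall(payload.encode())

        with socket.create_connection((HOST, PORT)) as sock:
            send_action(sock, Action="Login", Username=USERNAME, Secret=SECRET)
            # Async originate: the call is placed in the background and its progress
            # comes back as events instead of blocking this request.
            send_action(
                sock,
                Action="Originate",
                Channel="DAHDI/1/5551234",   # placeholder FXO channel/number
                Context="outbound-test",
                Exten="s",
                Priority="1",
                Async="true",
            )
            # Read the event stream; with working call-progress detection the
            # busy/ringing/answer/disconnect outcomes show up here.
            while True:
                chunk = sock.recv(4096)
                if not chunk:
                    break
                print(chunk.decode(errors="replace"), end="")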

    Read the article

  • get drop down value using dojo

    - by vetri
    I have struts code like <html:select property="ce"> <html:option value = "5">5</html:option> <html:option value = "10">10</html:option> <html:option value = "15">15</html:option> </html:select> <div id="dis"> <div> If an option is selected, dojo should get the value, multiply it by 10, and display the result in the div. How do I do that?

    Read the article

  • How should 4 decimal places behave, being simple yet powerful

    - by vener
    I have a UI question that has troubled me: what is the best method to handle 4 decimal places for prices? In a table already crammed full of data, I want to simplify the interface so it is not so cluttered. The current UI is shown below. http://i41.tinypic.com/bg5tub.jpg The problem is that for a unit price/units/D.Price and Dis. (Discount) to have 4 decimal places ($0.3459) is quite rare, but it still happens (5 in 100 entries). This results in a lot of junk decimal places, cluttering up the interface. What is the best solution to this problem? In short, I want to declutter it yet maintain the precision. Note: this is a web app.
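
    One decluttering approach is to render every price with two decimals by default and only extend to four when the extra digits actually carry information. A minimal Python sketch of such a formatter (the sample values are made up for illustration):

        from decimal import Decimal

        def format_price(value, min_places=2, max_places=4):
            """Show min_places decimals normally; keep more only when they carry data."""
            d = Decimal(str(value)).quantize(Decimal(1).scaleb(-max_places))
            whole, frac = f"{d:.{max_places}f}".split(".")
            # Trim trailing zeros, but never below the minimum number of places.
            frac = frac.rstrip("0").ljust(min_places, "0")
            return f"${whole}.{frac}"

        # Hypothetical unit prices from the table
        for price in (0.3459, 12.5, 3.0, 7.25):
            print(format_price(price))
        # prints $0.3459, $12.50, $3.00, $7.25 (one per line)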

    Read the article

  • What is the best practice for relational database tables in MySQL?

    - by George
    Hi, I know there is a lot of info on MySQL out there, but I was not really able to find an answer to this specific and actually simple question. Let's say I have two tables: USERS (with many fields, e.g. name, street, email, etc.) and GROUPS (also with many fields). The relation is (I guess?) 1:n, that is, ONE user can be a member of MANY groups. What I did is create another table, named USERS_GROUPS_REL. This table has only two fields: us_id (unique key of table USERS) and gr_id (unique key of table GROUPS). In PHP I do a query with a join. Is this "best practice" or is there a better way? Thankful for any hint!
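
    The pattern described (a USERS_GROUPS_REL junction table) is the standard way to model what is effectively a many-to-many relationship. A minimal sketch below, using Python's built-in sqlite3 purely for illustration; the schema and JOIN carry over to MySQL, and the table and column names follow the question (GROUPS is quoted because it is a keyword in some engines):

        import sqlite3

        con = sqlite3.connect(":memory:")  # throwaway in-memory DB for the example
        con.executescript("""
            CREATE TABLE USERS   (us_id INTEGER PRIMARY KEY, name TEXT, email TEXT);
            CREATE TABLE "GROUPS" (gr_id INTEGER PRIMARY KEY, title TEXT);
            CREATE TABLE USERS_GROUPS_REL (
                us_id INTEGER REFERENCES USERS(us_id),
                gr_id INTEGER REFERENCES "GROUPS"(gr_id),
                PRIMARY KEY (us_id, gr_id)        -- one membership row per user/group pair
            );
            INSERT INTO USERS    VALUES (1, 'George', 'george@example.org');
            INSERT INTO "GROUPS" VALUES (10, 'Admins'), (20, 'Editors');
            INSERT INTO USERS_GROUPS_REL VALUES (1, 10), (1, 20);
        """)

        # All groups a given user belongs to, via the junction table
        rows = con.execute("""
            SELECT g.title
              FROM "GROUPS" g
              JOIN USERS_GROUPS_REL rel ON rel.gr_id = g.gr_id
             WHERE rel.us_id = ?
             ORDER BY g.title
        """, (1,)).fetchall()
        print([title for (title,) in rows])   # ['Admins', 'Editors']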

    Read the article

  • Oracle Partner Architects Training

    - by mseika
    Dear Oracle Partner, There is a lot more to Oracle technology than meets the eye. Sure, you already belong to a small circle of our most experienced and committed partners. But are you making the best use possible of our technology solutions? Put it to the test. Join the “Oracle Partner Architects Training”. It is aimed at providing your experts, architects and consultants with in-depth architectural knowledge about Oracle technology. Here is your chance to learn from the best. Seasoned speakers, exclusive content and no product marketing. Oracle technology beyond the obvious. Choose from any of the 40 recorded training sessions. Topics include:
      • Security
      • Service integration
      • Database and options
      • Data integration
      • BI and applications
      • Applications and infrastructure
      • Hardware and software combinations
    The market and Oracle value specialized partners
    More information about specialization can be found on opn.oracle.com. Click through to OPN Program/Specialize. “What’s in it for us?” Quite simply: the opportunity to gain the differentiation and competitive edge you need to stand out in the marketplace.
      • Differentiate your company through expertise in leading Oracle IT solutions;
      • Get your experts, architects and consultants up to speed on specialized services and solutions;
      • Make our customers’ shortlists. They are looking for value-added solutions for their business.
    Recordings
    All sessions are recorded. After registering for a session in oraevents, you will receive the info to access the webex recording. Your timing, your tempo.
    Registration and more information
    Visit architects.oraevents.eu to sign up for the recorded sessions. NOTE: Looking to get your consultants Oracle certified? One more reason to join the Oracle Partner Architects Training. It is the fast track to getting their expertise validated with an Oracle certificate.
    Training schedule
    Choose from any of the 40 recorded training sessions:
    SECURITY: THE PRACTICAL APPROACH
      • Identity governance
      • Access management
      • Data privacy and protection
      • End-to-end security, layers of exposures
      • Identity & access management, why and where to start?
      • Data security, how?
    SERVICE INTEGRATION: A NEW ROAD TO ENTERPRISE-WIDE SERVICE INTEGRATION
      • Oracle RUEI: maximize business value by insight into real end-user experiences
      • Governance challenges in the services landscape
      • Creating an agile enterprise (by Jeff Davies)
      • Oracle’s approach to SOA (by Jeff Davies) - guiding and accelerating SOA success
      • Technical case study – the SOA challenge
      • Oracle’s unified business process management suite 11g (incl. demo)
    DATABASE: DATABASE AND OPTIONS, GOING WIDE
      • Understanding service level agreements for databases
      • Database lifecycle management
      • Data centric information lifecycle management
    DATA INTEGRATION: DIS FOR ARCHITECTS
      • Data integration solutions: an overview
      • ODI and GoldenGate
      • Data quality

    Read the article

  • Need some concrete examples of user stories, tasks and how they relate to functional and technical specifications

    - by gideon
    A little heads-up: I'm the only (lonely) dev building, planning and mocking my project as I go. I've come up with a preview release that does only the core aspects of the system, with good business value, and I've coded most of the UI as dirty, throwaway mockups over nicely abstracted and very minimal base code. In the end I know quite well what my clients want on the whole. I can't take agile-ish cowboying anymore because I'm completely disorganized and have no paper plan, and since my clients are happy, things are getting more complex with more features and ideas. So I started using and learning Agile and Scrum. Here are my problems: I know what a functional spec is (sample): do all user stories and/or scenarios become part of the functional spec? I know what user stories and tasks are. Are these a kind of user story? I don't see any business-value reason added to them. I made a mind map using FreeMind, and I had problems like this: Actor: Finance Manager. Can add a financial plan into the system because, well, that's the point of it? What business-value reason do I add for things like this? Example: a user needs to be able to add a blog article (in the blogger app) because..?? It's the point of a blogger app; it centers around that feature? How do I go into the finer details and system definitions: Actor: Finance Manager. Action: adds a finance plan. This adding is a complicated process with lots of steps. What user story will describe what a finance plan in the system is? I can add it into the functional spec under definitions, explaining what a finance plan is and how one needs to add it into the system, but how do I get to the backlog planning from there? Example: a blog article is mostly a textual document that can be written in rich text in the system. To add a blog article one must...... But how do you create a backlog list/features out of this? Where are the user stories for what a blog article is and how one adds/removes it? Finally, I'm a little confused about the relations between functional specs and user stories. Will my spec contain user stories in it, with UI mockups? Will these user stories then branch out into tasks which make up something like a technical specification? Example: EditorUser can add a blog article. Use XML to store the blog article. Add a form to add a blog. Add Windows Live Writer support. Those would be agile tasks, but would they also be part of, or form, the technical specs? Some concrete examples/answers to my questions would be nice!

    Read the article

  • Backing up SQL NetApp Snapshots using TSM

    - by WerkkreW
    In our environment we have a 3-node SQL 2005 cluster which is on NetApp storage. We are currently using SMSQL (NetApp SnapManager for SQL) to take Snapshot backups of the data. This works great, but due to some audit requirements we are also forced to maintain some copies on tape. We have used NDMP in other places across the enterprise but we do not want to use it in this specific instance. Basically what I need to do is get the most recent snapshot copy of the databases onto tape, via Tivoli Storage Manager (TSM). What I have done is obtain a basic Windows Server 2003 VM with SnapDrive installed, which is SAN-attached and zoned to the NetApp, and I have written a batch file to do the following: mount the latest __RECENT snapshot LUN on the host using a specific drive letter, perform a TSM-based incremental backup, and dismount the LUN. This seems to work fine, except sometimes the LUNs do not mount due to some sort of timeout. Also, due to my limited knowledge of Windows batch scripting, I have no way to monitor the success or failure of these backups, since I do not know how to send a valid return code back to the TSM scheduling service. Is there a more efficient/elegant way to accomplish this without NDMP?
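
    On the return-code point specifically, one option (independent of the mount timeouts) is to wrap the three steps in a small script whose exit code reflects the first failing step, since the TSM scheduler records the exit code of whatever command it runs. A rough Python sketch is below; the mount/backup/dismount command lines are placeholders to be replaced with the actual SnapDrive and TSM client invocations from the existing batch file.

        import subprocess
        import sys

        # Placeholder command lines: substitute the real SnapDrive and TSM client
        # invocations currently used in the batch file.
        STEPS = [
            ("mount snapshot LUN",      r"C:\scripts\mount_snapshot.cmd"),
            ("TSM incremental backup",  r"C:\scripts\run_tsm_backup.cmd"),
            ("dismount LUN",            r"C:\scripts\dismount_snapshot.cmd"),
        ]

        def main() -> int:
            for label, cmd in STEPS:
                rc = subprocess.run(cmd, shell=True).returncode  # output goes to the schedule log
                if rc != 0:
                    print(f"step failed: {label} (rc={rc})", file=sys.stderr)
                    return rc          # non-zero exit code is what the TSM scheduler sees
                print(f"step ok: {label}")
            return 0                   # zero signals success to the scheduler

        if __name__ == "__main__":
            sys.exit(main())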

    Read the article

  • SQL Server 2008 R2 - Cannot create database snapshot

    - by Chris Diver
    Server: Windows Server 2008 R2 X64 Enterprise. SQL: SQL Server 2008 R2 Enterprise X64. I have a default SQL Server instance, and the SQL Server service account is running as a domain user. I am trying to create a database snapshot in the directory where the mdf files are stored. The T-SQL syntax is correct. The file system is NTFS. The error message I get is:
        Msg 1823, Level 16, State 2, Line 1
        A database snapshot cannot be created because it failed to start.
        Msg 5119, Level 16, State 1, Line 1
        Cannot make the file "e:\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\TestDB.ss" a sparse file. Make sure the file system supports sparse files.
    The local SQLServerMSSQLUser$db$MSSQLSERVER group has Full Control permission on the folder where I am trying to create the snapshot. I can fix the problem in two ways, neither of which is suitable: add the SQL Server service (domain) account to the local Administrators group and restart the SQL service, or grant the local SQLServerMSSQLUser$db$MSSQLSERVER group Full Control on E:\. I have tried to change the owner of the DATA directory to SQLServerMSSQLUser$db$MSSQLSERVER to no avail. I have no issue creating a new database. Why can I not create a snapshot by giving permission only on the DATA folder? Update 23/09/2010: I have tried mrdenny's suggestion with no luck (but learned something new in the process). I suspect the problem may be due to the fact that the domain is a Windows 2000 domain running in mixed mode. I had to install hotfix KB976494 for Server 2008 R2, as the SQL Server 2008 R2 installer would not verify the service account correctly with the domain. I noticed that Server 2000 isn't a supported operating system for SQL 2008 R2 but cannot find anything that would suggest it shouldn't work in a 2000 domain. I removed the test server from the domain and changed the service accounts to the local service account, and I still have the same issue. I will try to re-install the server without joining the domain and without the hotfix and see if the issue persists.

    Read the article

  • Tell postfix to merge three Authentication-Results:-Lines into one?

    - by Peter
    I am running a Postfix MTA on Debian Wheezy. I am using postfix-policyd-spf-python, opendkim and opendmarc. When receiving e-mails from Google (Google Apps with my own domain), for example, the header looks like this: [...] Authentication-Results: mail.xx.de; dkim=pass reason="1024-bit key; insecure key" header.d=yyy.com [email protected] header.b=OswLe0N+; dkim-adsp=pass; dkim-atps=neutral [...] Authentication-Results: mail.xx.de; spf=pass (sender SPF authorized) smtp.mailfrom=yyy.com (client-ip=2a00:1450:400c:c00::242; helo=mail-wg0-x242.google.com; [email protected]; [email protected]) [...] Authentication-Results: mail.xx.de; dmarc=pass header.from=yyy.com [...] This means each of these programs creates its own Authentication-Results line. Is it possible to tell Postfix to merge these into one single Authentication-Results line? When I send an e-mail to Google, it says: [...] Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates xxx.xxx.xxx.xxx as permitted sender) [email protected]; dkim=pass [email protected]; dmarc=pass (p=NONE dis=NONE) header.from=xxx.com [...] And this is exactly what I want: just one Authentication-Results header. How can I do this? Thanks. Regards, Peter

    Read the article

  • How to block/avoid a particular IP when connecting to websites?

    - by Mark
    I'm having trouble connecting to a particular website. I can view it through a proxy, but not from home. So I ran a traceroute:
        Tracing route to fvringette.com [76.74.225.90] over a maximum of 30 hops:
          1    <1 ms    <1 ms    <1 ms  <snip>
          2     *        *        *     Request timed out.
          3     9 ms     7 ms    27 ms  rd2bb-ge2-0-0-22.vc.shawcable.net [64.59.146.226]
          4     8 ms     7 ms     7 ms  rc2bb-tge0-9-2-0.vc.shawcable.net [66.163.69.41]
          5    10 ms     9 ms     9 ms  rc2wh-tge0-0-1-0.vc.shawcable.net [66.163.69.65]
          6    27 ms    23 ms    22 ms  ge-gi0-2.pix.van.peer1.net [206.223.127.1]
          7    18 ms    18 ms    20 ms  10ge.xe-0-2-0.van-spenc-dis-1.peer1.net [216.187.89.206]
          8     9 ms    11 ms    10 ms  64.69.91.245
          9     *        *        *     Request timed out.
         10     *        *        *     Request timed out.
        ...
    Looks like this "64.69.91.245" is somehow blocking me. Can I tell my computer to avoid/bypass that IP when trying to connect?

    Read the article

  • How to eliminate the domain suffix from my user profile folder when migrating to a new domain?

    - by Jerry Dodge
    We have just upgraded a decade-old SBS 2003 server to a brand new SBS 2011 machine. During the process, over 30 other client/server machines on that domain also needed to be removed from the old domain and joined to the new one. The domains have different names, and nothing was migrated; the new domain was built from scratch. Since each client machine had unique user profiles under the old domain, we needed to make sure these were all backed up and migrated over to the new domain. For the most part, profiles were migrated with no hassle, just by renaming the user profile folders. However, in one case, when I log in to my domain account, it creates a profile folder with a suffix of the new domain name. I have replaced all the files in the profile's root which begin with "ntuser" with the files of the new profile. The only problem is that half the applications can't find their data, because the folder name is different. How can I change this folder name and maintain this profile on the new domain? I have deleted every user account (except admin), deleted their profiles/folders, removed them from the registry, and made sure every trace of the account was gone. The computer was basically a dummy with only an admin account. Then I log into the machine under my new domain user account (same username as on the old domain), and it creates a profile folder with my username plus a suffix extension of the new domain name. The client machine is Windows 7 Ultimate, the old server was SBS 2003, and the new server is SBS 2011.

    Read the article

  • Windows 7 not booting up and stuck at startup repair

    - by mikimr
    I've been having issues with Windows 7 Home Premium on a Lenovo laptop. At first, it would not start up normally at all. I started it in Safe Mode, where I disabled all non-MS services and tried again, to no avail. It then went into Startup Repair, which failed several times. I tried copying the original registry settings; still the same. I resorted to booting with an Ubuntu DVD, where I ran boot-repair, which is supposed to correct the Windows boot. No luck. I then used the Win7 DVD to start up, where I had the option to install or repair. I chose the repair, got into a command prompt, and ran chkdsk /i /r, which found 3 unreadable segments, went through the 2nd step without issues, and completed the 3rd step with some errors (I can't recall the exact errors). When I restarted the machine, it went straight to Startup Repair, indicating "Attempting repairs... Repairing disk errors. This might take over an hour to complete." It's been like this for nearly 15 hours. When I try to cancel or close the Startup Repair window, I get the message "The current repair operation cannot be cancelled." Should I let it run or force-shut the machine? If I force-shut it, how can I resolve this problem? Thanks.

    Read the article

  • Oracle Data Integration 12c: Simplified, Future-Ready, High-Performance Solutions

    - by Thanos Terentes Printzios
    In today’s data-driven business environment, organizations need to cost-effectively manage the ever-growing streams of information originating both inside and outside the firewall and address emerging deployment styles like cloud, big data analytics, and real-time replication. Oracle Data Integration delivers pervasive and continuous access to timely and trusted data across heterogeneous systems. Oracle is enhancing its data integration offering by announcing the general availability of the 12c release for the key data integration products, Oracle Data Integrator 12c and Oracle GoldenGate 12c, delivering simplified, high-performance solutions for cloud, big data analytics, and real-time replication. The new release delivers extreme performance, increases IT productivity, and simplifies deployment, while helping IT organizations keep pace with new data-oriented technology trends including cloud computing, big data analytics, and real-time business intelligence. With the 12c release Oracle becomes the new leader in data integration and replication technologies, as no other vendor offers such a complete set of data integration capabilities for pervasive, continuous access to trusted data across Oracle platforms as well as third-party systems and applications. The Oracle Data Integration 12c release addresses data-driven organizations’ critical and evolving data integration requirements under 3 key themes:
      • Future-Ready Solutions: supporting current and emerging initiatives
      • Extreme Performance: even higher performance than ever before
      • Fast Time-to-Value: higher IT productivity and simplified solutions
    With the new capabilities in Oracle Data Integrator 12c, customers can benefit from:
      • Superior developer productivity, ease of use, and rapid time-to-market with the new flow-based mapping model, reusable mappings, and step-by-step debugger.
      • Increased performance when executing data integration processes due to improved parallelism.
      • Improved productivity and monitoring via tighter integration with Oracle GoldenGate 12c and Oracle Enterprise Manager 12c.
      • Improved interoperability with Oracle Warehouse Builder, which enables faster and easier migration to Oracle Data Integrator’s strategic data integration offering.
      • Faster implementation of business analytics through Oracle Data Integrator pre-integrated with Oracle BI Applications’ latest release. Oracle Data Integrator also integrates simply and easily with Oracle Business Analytics tools, including OBI-EE and Oracle Hyperion.
      • Support for loading and transforming big and fast data, enabled by integration with big data technologies: Hadoop, Hive, HDFS, and Oracle Big Data Appliance.
    Only Oracle GoldenGate provides best-of-breed real-time replication of data in heterogeneous data environments. With the new capabilities in Oracle GoldenGate 12c, customers can benefit from:
      • Simplified setup and management of Oracle GoldenGate 12c when using multiple database delivery processes, via a new Coordinated Delivery feature for non-Oracle databases.
      • Expanded heterogeneity through added support for the latest versions of major databases such as Sybase ASE v 15.7, MySQL NDB Clusters 7.2, and MySQL 5.6, as well as integration with Oracle Coherence.
      • Enhanced high availability and data protection via integration with Oracle Data Guard and Fast-Start Failover.
      • Enhanced security for credentials and encryption keys using Oracle Wallet.
      • Real-time replication for databases hosted on public cloud environments supported by third-party clouds.
    Tight integration between Oracle Data Integrator 12c and Oracle GoldenGate 12c and other Oracle technologies, such as Oracle Database 12c and Oracle Applications, provides a number of benefits for organizations:
      • Tight integration between Oracle Data Integrator 12c and Oracle GoldenGate 12c enables developers to leverage Oracle GoldenGate’s low-overhead, real-time change data capture completely within Oracle Data Integrator Studio without additional training.
      • Integration with Oracle Database 12c provides a strong foundation for seamless private cloud deployments.
      • Delivers real-time data for reporting, zero-downtime migration, and improved performance and availability for Oracle Applications, such as Oracle E-Business Suite and ATG Web Commerce.
      • Oracle’s data integration offering is optimized for Oracle Engineered Systems and is an integral part of Oracle’s fast data, real-time analytics strategy on Oracle Exadata Database Machine and Oracle Exalytics In-Memory Machine.
    Oracle Data Integrator 12c and Oracle GoldenGate 12c differentiate the new offering on data integration with these many new features. This is just a quick glimpse into Oracle Data Integrator 12c and Oracle GoldenGate 12c. Find out much more about the new release in the video webcast "Introducing 12c for Oracle Data Integration", where customer and partner speakers, including SolarWorld, BT and Rittman Mead, will join us in launching the new release.
    Resource kits: Meet Oracle Data Integration 12c; Discover what's new with Oracle GoldenGate 12c. The Oracle EMEA DIS (Data Integration Solutions) Partner Community is available for all your questions, while additional partner-focused webcasts will be made available through our blog here, so stay connected. For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com

    Read the article

  • How to create a Global Rule that stores a document’s folder path in a custom metadata field

    - by Nicolas Montoya
    Efficiency purists would argue that redundancy is not necessary. In real life, we are willing to pay a price for performance, i.e. to have information at our fingertips. We have run into customers opting to store a document's folder path as a document metadata field. They have their reasons, half of the ECM community will agree with them, and the other half would raise an eyebrow. In the end, they are getting creative to achieve their document management goals. The steps below outline how to create a Global Rule that stores a document’s folder path in a custom metadata field:
      • Create a Global Rule via Configuration Manager > Rules Tab > Add.
      • Check “Is global rule with priority”.
      • Check “Use rule activation condition”, then go to “Edit” and check the actions for this in Script Properties. Click OK and the rule activation condition will appear.
      • Go to the Fields Tab and add a Rule Field.
      • Select the target Custom Metadata Field and click OK, then check “Is derived field”, then “Edit”, then go to the Custom Tab in the Script Properties window and enter the custom script below:
        <$if #active.dCollectionPath$>
          <$dprDerivedValue=#active.dCollectionPath$>
        <$else$>
          <$dprDerivedValue=#active.xCollectionIDPath$>
        <$endif$>
    For more information on the dCollectionPath property, see Section 8.2 Folder Services in the Oracle® Fusion Middleware Services Reference Guide for Oracle Universal Content Management 11g Release 1 (11.1.1): http://docs.oracle.com/cd/E21043_01/doc.1111/e11011/c08_folders002.htm The above rule will keep the Custom Metadata Field updated with the folder path information when a document is checked in via the Content Server (CS) Web Interface or the Desktop Integration Suite (DIS).

    Read the article

  • Big Data – Operational Databases Supporting Big Data – Columnar, Graph and Spatial Database – Day 14 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of Key-Value Pair Databases and Document Databases in the Big Data Story. In this article we will understand the role of Columnar, Graph and Spatial Databases in supporting the Big Data Story. Now we will see a few examples of the operational databases:
      • Relational Databases (the day before yesterday’s post)
      • NoSQL Databases (the day before yesterday’s post)
      • Key-Value Pair Databases (yesterday’s post)
      • Document Databases (yesterday’s post)
      • Columnar Databases (today’s post)
      • Graph Databases (today’s post)
      • Spatial Databases (today’s post)
    Columnar Databases
    A relational database is a row store database, or row-oriented database. Columnar databases are column-oriented or column store databases. As we discussed earlier, in Big Data we have different kinds of data and we need to store those different kinds of data in the database. With a columnar database it is very easy to do so, as we can just add a new column. HBase is one of the most popular columnar databases. It uses the Hadoop file system and MapReduce for its core data storage. However, remember this is not a good solution for every application; it is particularly good for databases where high-volume incremental data is gathered and processed.
    Graph Databases
    For highly interconnected data it is suitable to use a Graph Database. This database has a node-relationship structure. Nodes and relationships contain key-value pairs where data is stored. The major advantage of this database is that it supports faster navigation among various relationships. For example, Facebook uses a graph database to list and demonstrate various relationships between users. Neo4J is one of the most popular open source graph databases. One of the major disadvantages of a graph database is that it is not possible to self-reference (self-joins in RDBMS terms), and there are real-world scenarios where this might be required which a graph database does not support.
    Spatial Databases
    We all use Foursquare, Google+ as well as Facebook check-ins for location-aware check-ins. All the location-aware applications figure out the position of the phone with the help of the Global Positioning System (GPS). Think about it: so many different users at different locations in the world, all checking in together. Additionally, the applications are now feature-rich and users are demanding more and more information from them, for example movies, coffee shops or places to see. They all run with the help of Spatial Databases. Spatial data is standardized by the Open Geospatial Consortium, known as OGC. Spatial data helps answer many interesting questions, like the distance between two locations or the area of interesting places. When we think about it, it is very clear that handling spatial data and returning meaningful results is one big task when there are millions of users moving dynamically from one place to another and requesting various spatial information. The PostGIS/OpenGIS suite is a very popular spatial database. It runs as a layer implementation on the PostgreSQL RDBMS, which makes it totally unique as it offers the best of both worlds.
    Tomorrow: In tomorrow’s blog post we will discuss a very important component of the Big Data Ecosystem: Hive.
    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
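
    To make the node-relationship idea from the Graph Databases section above concrete, here is a tiny, purely illustrative Python sketch of a graph-style store: nodes and relationships each carry key-value properties, and "navigation" is just following edges rather than joining tables. This is a toy model, not how Neo4J or HBase are actually implemented.

        # Toy graph store: nodes and edges both hold key-value properties.
        nodes = {
            "alice": {"type": "user", "city": "Vancouver"},
            "bob":   {"type": "user", "city": "Pune"},
            "carol": {"type": "user", "city": "Austin"},
        }
        edges = [
            ("alice", "FRIEND", "bob",   {"since": 2011}),
            ("bob",   "FRIEND", "carol", {"since": 2013}),
        ]

        def neighbours(node, rel_type):
            """Follow outgoing relationships of one type from a node."""
            return [dst for src, rel, dst, _props in edges if src == node and rel == rel_type]

        # Friends-of-friends for alice: two hops of edge traversal, no join needed.
        fof = {f2 for f1 in neighbours("alice", "FRIEND") for f2 in neighbours(f1, "FRIEND")}
        print(fof)   # {'carol'}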

    Read the article

  • How to load local images in a webview in the iPhone SDK?

    - by monish
    Hi guys, I am having a problem loading a local image in a webview using HTML code: my image is not displaying in the webview. Here is my code for loading the local image in the webview:
        htmlString = [htmlString stringByAppendingString:@"<table width=85% cellpadding=3 cellspacing=0 border=0> "];
        htmlString = [htmlString stringByAppendingString:@"<tr>"];
        htmlString = [htmlString stringByAppendingString:@"<td >"];
        htmlString = [htmlString stringByAppendingString:@"<div align='Right'><a href='/reviews/"];
        htmlString = [htmlString stringByAppendingString:@"'>"];
        htmlString = [htmlString stringByAppendingString:@"<img src ='reviews1.jpg' BORDER=0 HEIGHT=148 WIDTH=110/>"];
        htmlString = [htmlString stringByAppendingString:@"</a>"];
        - (BOOL)webView:(UIWebView*)webView2 shouldStartLoadWithRequest:(NSURLRequest*)request navigationType:(UIWebViewNavigationType)navigationType
        {
            //CAPTURE USER LINK-CLICK.
            if (navigationType == UIWebViewNavigationTypeLinkClicked)
            {
                NSURL *aURL = [request URL];
                NSString* aurlString = [aURL absoluteString];
                if ( [aurlString rangeOfString:@"/reviews/"].location != NSNotFound)
                {
                    [UIApplication sharedApplication].networkActivityIndicatorVisible = YES;
                    ReviewWebViewController* viewController = [[ReviewWebViewController alloc] init];
                    viewController.loadUrl=[techProducts techProductReviewUrl];
                    [self.navigationController pushViewController:viewController animated:YES];
                    [UIApplication sharedApplication].networkActivityIndicatorVisible = NO;
                    [viewController release];
                    return NO;
                }
            }
        }
    No image is displayed; only a question-mark placeholder shows. Please help me get out of this problem. Thank you, Monish.

    Read the article

  • How to access a type defined in one .ml file from another .ml file

    - by user339261
    Hi, I am very new to OCaml and I am facing the problem below. In a.ml a record type t is defined, and it is also defined transparently in a.mli, i.e. in the interface, so that the type definition is available to all other files. a.ml also has a function, func, which returns a list of t. Now in another file, b.ml, I am calling func; obviously the OCaml compiler would not be able to infer the type of the objects stored in the list, since to the compiler it is just a list. So in b.ml I have something like this: let tlist = A.func in let vart = List.hd tlist in printf "%s\n" vart.name (name is a field in record t). Here I get a compiler error saying "Unbound record field label name", which makes sense, as the compiler can't infer the type of vart. My first question: how do I explicitly give vart the type t here? I tried doing "let vart:A.t = " but got the same error. I also tried creating another function to fetch the first element of the list, giving the return type as A.t, but then I got "Unbound value A.t". I did this: let firstt = function [] -> 0 | x :: _ -> A.t x ;; The problem is that the compiler is unable to recognize A.t (a type) in b.ml, but is able to recognize the function A.func. If I remove A.t from b.ml, I don't get any compiler errors. Please help, it's urgent. Thanks in advance! ~Tarun

    Read the article

  • Return a list of imported Python modules used in a script?

    - by Jono Bacon
    Hi All, I am writing a program that categorizes a list of Python files by which modules they import. As such I need to scan the collection of .py files and return a list of which modules they import. As an example, if one of the files I import has the following lines:
        import os
        import sys, gtk
    I would like it to return: ["os", "sys", "gtk"] I played with modulefinder and wrote:
        from modulefinder import ModuleFinder
        finder = ModuleFinder()
        finder.run_script('testscript.py')
        print 'Loaded modules:'
        for name, mod in finder.modules.iteritems(): print '%s ' % name,
    but this returns more than just the modules used in the script. As an example, in a script which merely has:
        import os
        print os.getenv('USERNAME')
    the modules returned from the ModuleFinder script are: tokenize heapq __future__ copy_reg sre_compile _collections cStringIO _sre functools random cPickle __builtin__ subprocess cmd gc __main__ operator array select _heapq _threading_local abc _bisect posixpath _random os2emxpath tempfile errno pprint binascii token sre_constants re _abcoll collections ntpath threading opcode _struct _warnings math shlex fcntl genericpath stat string warnings UserDict inspect repr struct sys pwd imp getopt readline copy bdb types strop _functools keyword thread StringIO bisect pickle signal traceback difflib marshal linecache itertools dummy_thread posix doctest unittest time sre_parse os pdb dis ...whereas I just want it to return 'os', as that was the module used in the script. Can anyone help me achieve this?
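
    One way to get only the modules a file imports directly, rather than everything modulefinder pulls in transitively, is to parse the source with the standard ast module and collect the Import/ImportFrom nodes. A minimal sketch (the 'testscript.py' filename follows the question):

        import ast

        def imported_modules(path):
            """Return the top-level module names a single .py file imports directly."""
            with open(path) as f:
                tree = ast.parse(f.read(), filename=path)
            names = []
            for node in ast.walk(tree):
                if isinstance(node, ast.Import):
                    names.extend(alias.name.split(".")[0] for alias in node.names)
                elif isinstance(node, ast.ImportFrom) and node.module:
                    names.append(node.module.split(".")[0])
            return sorted(set(names))

        print(imported_modules("testscript.py"))   # e.g. ['gtk', 'os', 'sys']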

    Read the article

  • Two divs with wrapping text sharing the same line

    - by Jerad Rose
    Simple problem - How do I get these two divs to share the same line: <div style="width: 200px; padding: 0; background-color: #f00; float: left; display: inline; ">Lorem ipsum dolor sit</div> <div style="margin-left: 200px; padding: 0; background-color: #0f0; float: right; display: inline; ">Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Duis interdum leo nec purus eleifend ut laoreet metus varius. Praesent lobortis risus sem. Duis gravida risus convallis purus dapibus fermentum. Nulla nec arcu pellentesque justo hendrerit pulvinar id ac velit. Nulla cursus volutpat risus, id volutpat metus tempus eget. Morbi rhoncus, diam sed vestibulum elementum, odio nulla faucibus ligula, ut dapibus lorem nunc vitae purus. Nam commodo iaculis ultricies. Etiam in velit dolor, vel convallis tellus. Aliquam tincidunt, erat ac dictum varius, sapien mi faucibus est, sit amet venenatis nisl massa non turpis. Donec eget libero mauris. Cras ac magna est, id hendrerit est.</div> Thanks in advance.

    Read the article

  • Python alignment of assignments (style)

    - by ikaros45
    I really like following style standards, such as those specified in PEP 8. I have a linter that checks it automatically, and my code is definitely much better because of that. There is just one point in PEP 8 where the E251 and E221 checks don't feel very good. Coming from a JavaScript background, I used to align variable assignments as follows:
        var var1        = 1234;
            var2        = 54;
            longer_name = 'hi';
        var lol = {
            'that'        : 65,
            'those'       : 87,
            'other_thing' : true
        };
    And in my humble opinion, this improves readability dramatically. The problem is, this is discouraged by PEP 8. With dictionaries it is not that bad, because spaces are allowed after the colon:
        dictionary = {
            'something':        98,
            'some_other_thing': False
        }
    I can "live" with variable assignments without alignment, but what I don't like at all is not being able to pass named arguments in a function call like this:
        some_func(length= 40, weight= 900, lol= 'troll', useless_var= True, intelligence=None)
    So, what I end up doing is using a dictionary, as follows:
        specs = { 'length': 40, 'weight': 900, 'lol': 'troll', 'useless_var': True, 'intelligence': None }
        some_func(**specs)
    or just simply:
        some_func(**{'length': 40, 'weight': 900, 'lol': 'troll', 'useless_var': True, 'intelligence': None})
    But I have the feeling this workaround is just worse than ignoring PEP 8 E251 / E221. What is the best practice?
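
    For the function-call case specifically, the PEP 8-friendly alternative to aligned keyword arguments is usually one argument per line with a hanging indent, which keeps the names scannable without extra spaces around '='. A small sketch using the question's hypothetical some_func:

        def some_func(length, weight, lol, useless_var, intelligence):
            """Stand-in for the question's hypothetical function."""
            return (length, weight, lol, useless_var, intelligence)

        # One keyword argument per line: readable, diff-friendly, and E251/E221 clean.
        result = some_func(
            length=40,
            weight=900,
            lol='troll',
            useless_var=True,
            intelligence=None,
        )
        print(result)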

    Read the article

  • C++ IDE for Linux?

    - by Sven
    I want to expand my programming horizons to Linux. A good, dependable basic toolset is important, and what is more basic than an IDE? I could find these SO topics: Lightweight IDE for linux and What tools do you use to develop C++ applications on Linux? I'm not looking for a lightweight IDE. If an IDE is worth the money, then I will pay for it, so it need not be free. My question, then: What good C++ programming IDE is available for Linux? The minimums are fairly standard: syntax highlighting, code completion (like IntelliSense or its Eclipse counterpart) and integrated debugging (e.g., basic breakpoints). I have searched for it myself, but there are so many that it is almost impossible to separate the good from the bad by hand, especially for someone like me who has little C++ coding experience on Linux. I know that Eclipse supports C++, and I really like that IDE for Java, but is it any good for C++, and is there something better? The second post actually has some good suggestions, but what I am missing is what exactly makes the suggested IDE so good for the user, and what are its (dis)advantages? Maybe my question should therefore be: What IDE do you propose (given your experiences), and why?

    Read the article

  • ASP disable button during postback

    - by user667430
    I have a button to which I am trying to add a CSS class so that it appears disabled once the user has clicked it.
        protected void Button_Continue_OnClick(object sender, EventArgs e)
        {
            Panel_Error.Visible = false;
            Literal_Error.Text = "";
            if (RadioButton_Yes.Checked)
            {
                ...//if radio checked get text and process etc.
            }
        }
    My button's OnClick handler is above; it simply processes a textbox filled in on the page. My button looks like this:
        <asp:Button runat="server" ID="Button_Continue" CssClass="button dis small" Text="Continue" OnClick="Button_Continue_OnClick" ClientIDMode="Static" OnClientClick="return preventMult();" />
    And my JavaScript is as follows:
        <script type="text/javascript">
            var isSubmitted = false;
            function preventMult() {
                if (isSubmitted == false) {
                    $('#Button_Continue').removeClass('ready');
                    $('#Button_Continue').addClass('disabled');
                    isSubmitted = true;
                    return true;
                }
                else {
                    return false;
                }
            }
        </script>
    The problem I am having is that the CSS class is added fine on the first postback, but after that my button's OnClick doesn't work and the button can't be clicked again if the user needs to resubmit the data because it is wrong. Another problem I am having is that with a breakpoint in my method I notice that the method fires twice on the click.

    Read the article

  • Internet Working, Browsing Not.

    - by jeffreypriebe
    I have a very odd problem that I can't resolve. I am connected to the internet, but my browsing doesn't work. I don't mean a web browser - I mean browsing. Firefox, Chrome and curl all fail to successfully connect to an HTTP address. However, existing connections, e.g. to mail in Outlook (Exchange Server and also IMAP server), continue to work. Also, the internet is up; I can confirm this both from my machine (other ports/connections) and from any other computer connected to the same network. Additionally, it appears to be HTTP, not simply a port issue, as HTTP over port 8443 (TortoiseSVN if you must know - running over HTTP, not over SVN) also fails. I am using Windows Vista SP2 (build 6002). It seems to "creep up", in that after running the computer for a few hours it will fail. (I have found no way to systematically reproduce the problem.) Additionally, it seems more prone to happen on days when the internet connection is flaky already (not sure why it is flaky, it just is: lots of failed browsing requests, and I have to retry/reload often). What I have tried when the problem arises (none of it has yielded any resolution):
      • Resetting the network connection (disconnect, reconnect)
      • Disabling/re-enabling the network adapter
      • Double-checking the IP settings
      • Double-checking the HOSTS file. Note: DNS continues to work (both new and cached responses to DNS queries). (Thanks for the suggestion Daniel and antenore.)
      • Checking the routing tables (IPv4 only, as IPv6 is beyond my understanding)
      • Resetting all involved hardware (routers and modems)
      • Closing and reopening browsers
      • Looking for malware interference: ran HijackThis; looked for suspicious processes using Sysinternals procexp; looked for explorer hijacks, LSA provider interference and Winsock provider interference using Sysinternals Autoruns; ran a complete anti-virus scan
      • Reviewing the output of netstat -onab to see if there were stuck ports open or unusual processes running somewhere
    The only thing that works is to do a full reboot. That works 100% of the time to restore browsing. What else can I try to nail down the problem?
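
    One additional data point worth collecting when the failure occurs is whether a plain TCP connection to port 80 still succeeds while HTTP requests hang, since that separates a transport-level problem from something interfering at the HTTP layer (proxy, LSP or filter driver). A minimal Python sketch of such a layer-by-layer check (example.com is just an illustrative target):

        import socket

        HOST = "example.com"   # any reliably reachable web host will do

        # Step 1: DNS resolution on its own.
        addr = socket.gethostbyname(HOST)
        print("DNS ok:", addr)

        # Step 2: raw TCP connect to port 80.
        with socket.create_connection((HOST, 80), timeout=10) as s:
            print("TCP connect ok")
            # Step 3: a bare-bones HTTP request over the same socket.
            s.sendall(b"GET / HTTP/1.1\r\nHost: " + HOST.encode() + b"\r\nConnection: close\r\n\r\n")
            status_line = s.recv(1024).split(b"\r\n", 1)[0]
            print("HTTP response:", status_line.decode(errors="replace"))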

    Read the article
