Search Results

Search found 3416 results on 137 pages for 'dba alex'.

Page 111/137 | < Previous Page | 107 108 109 110 111 112 113 114 115 116 117 118  | Next Page >

  • Contiguous Time Periods

    It is always more efficient to maintain referential integrity by using constraints rather than triggers. Sometimes it isn't obvious how to do this. Until a recent idea by Alex Kuznetsov, the history table presented problems for checking data that were difficult to solve with constraints. Joe Celko explains.
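
    As a rough sketch of the constraint-only idea the article builds on (illustrative table and column names, not the article's own schema): each row carries the end date of the previous period, a CHECK constraint forces the new period to start exactly there, and a composite foreign key back onto the same table guarantees that the previous period actually exists, so the periods for a given task chain together with no gaps.

        -- Minimal sketch, SQL Server syntax; names are illustrative, not from the article.
        CREATE TABLE TaskHistory
        (
            TaskID             INT      NOT NULL,
            StartedAt          DATETIME NOT NULL,
            FinishedAt         DATETIME NOT NULL,
            PreviousFinishedAt DATETIME NULL,   -- NULL only for the first period of a task
            CONSTRAINT PK_TaskHistory PRIMARY KEY (TaskID, StartedAt),
            CONSTRAINT UNQ_TaskHistory_Finish UNIQUE (TaskID, FinishedAt),
            CONSTRAINT CHK_TaskHistory_Order CHECK (StartedAt < FinishedAt),
            -- a new period must begin exactly where the previous one ended
            CONSTRAINT CHK_TaskHistory_Contiguous CHECK (PreviousFinishedAt = StartedAt),
            -- and that previous end must exist as some row's FinishedAt
            CONSTRAINT FK_TaskHistory_Previous FOREIGN KEY (TaskID, PreviousFinishedAt)
                REFERENCES TaskHistory (TaskID, FinishedAt)
        );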

    Read the article

  • Cannot add folders to Ubuntu One

    - by Akmur
    I have signed up with Ubuntu One (20GB) and I'm having the following issue: I basically cannot add folders through the panel interface. I have been able to add five folders so far, but I'd like to add some more (yes, they are inside the home folder). I don't get any errors, but nothing is added to the folders list. When I use the command-line interface like this: u1sdtool --create-folder /home/alex/Web it basically hangs and nothing happens. If I then list the folders from the command line, my folder is not there. Any idea? (I'm on 12.10)

    Read the article

  • Question about whether I should pursue minors or work towards a master's [closed]

    - by user75493
    I'm a sophomore about to go into my Spring semester in a few weeks, and I am trying to decide some major things that will affect the next few years of my life, so I was looking for some guidance/advice. I'm currently working towards a Bachelor's in Computer Science, and my question is whether I should minor in Corporate Strategy and Math or just work towards a Master's in Computer Science. I was wondering whether employers look more for Master's degrees in Computer Science or whether those minors could be beneficial to me. Some things that may affect responses: I already have all my other graduation requirements filled except my Computer Science classes, I've already had an internship this past summer, and I'm already hearing back from several companies about internships for this upcoming summer. Thanks in advance! - Alex

    Read the article

  • A relatively new blog seems to be getting very poor Google indexing

    - by Genadinik
    I have a new blog that is 2 months old. In the first few weeks it was getting indexed nicely, and my Google Webmaster reports were showing that it was getting crawled and beginning to rank for some terms. Then, as I kept writing, the Google Webmaster report thinned out and showed fewer and fewer terms that this blog ranks for. Now there are only 4 terms, one of them being my name. Is there something I need to do to keep the old posts indexed and crawled? Thanks, Alex

    Read the article

  • JavaOne Tokyo 2012

    - by hideki ito
    JavaOne Tokyo 2012 was held over two days, April 4-5, 2012, the first JavaOne event held in Japan in seven years. Carrying forward the “Moving Java Forward” theme from JavaOne 2011, the JavaOne Technical Keynote looked at the current state and direction of the Java platform: Alex Buckley spoke about the Java language and Project Coin, Richard Bair of Oracle Corporation covered JavaFX, Mike Keith presented Java EE, and Terrence Barr discussed Java ME and Java Embedded. The next stop is JavaOne 2012 San Francisco, September 30 to October 4, 2012. JavaOne 2012 San Francisco http://www.oracle.com/javaone/index.html

    Read the article

  • Oracle query with inconsistent results

    - by Spencer Stejskal
    I'm having a very strange problem: I have a complicated view that returns incorrect data when I query on a particular column. Here's an example: select empname, has_garnishment from timecard_v2 where empname = 'Testerson, Testy'; This returns the single result 'Testerson, Testy', 'N'. However, if I use the query: select empname, has_garnishment from timecard_v2 where empname = 'Testerson, Testy' and has_garnishment = 'Y'; This returns the single result 'Testerson, Testy', 'Y'. The second query should return a subset of the first query, but it returns a different answer. I have dissected the view and determined that this section of the view definition is where the problem arises (Note: I removed all of the select clause except the parts of interest for clarity; in the full query all joined tables are required): SELECT e.fullname empname , NVL2(ded.has_garn, 'Y', 'N') has_garnishment FROM timecard tc , orderdetail od , orderassign oa , employee e , employee3 e3 , customer10 c10 , order_misc om, (SELECT COUNT(*) has_garn, v_ssn FROM deductions WHERE yymmdd_stop = 0 OR (LENGTH(yymmdd_stop) = 7 AND to_date(SUBSTR(yymmdd_stop, 2), 'YYMMDD') sysdate) GROUP BY v_ssn ) ded WHERE oa.lrn(+) = tc.lrn_order AND om.lrn(+) = od.lrn AND od.orderno = oa.orderno AND e.ssn = tc.ssn AND c10.custno = tc.custno AND e.lrn = e3.lrn AND e.ssn = ded.v_ssn(+) One thing of note about the definition of the 'ded' subquery: the v_ssn field is a virtual field on the deductions table. I am not a DBA, I'm a software developer, but we recently lost our DBA and the new one is still getting up to speed, so I'm trying to debug this issue. That being said, please explain things a little more thoroughly than you would for a fellow Oracle expert. Thanks
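
    Not from the original post, but one way to dig further is to capture the execution plan of each query and compare how Oracle merges the view and the outer join to the ded subquery in the two cases; the plan often shows why a seemingly narrower predicate changes the rows that come back.

        -- Illustrative diagnostic only (standard EXPLAIN PLAN / DBMS_XPLAN usage).
        EXPLAIN PLAN FOR
        SELECT empname, has_garnishment
        FROM   timecard_v2
        WHERE  empname = 'Testerson, Testy';

        SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

        EXPLAIN PLAN FOR
        SELECT empname, has_garnishment
        FROM   timecard_v2
        WHERE  empname = 'Testerson, Testy'
        AND    has_garnishment = 'Y';

        SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);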

    Read the article

  • Login as SYS user to Oracle 11g from .NET

    - by Jens Bannmann
    Using the Oracle Data Provider for .NET, my application connects to the database using the privileged SYS user. The connection string is as follows: Data Source=MyTnsName;User ID=sys;Password=MySysPassword;DBA Privilege=SYSDBA This works fine with Oracle 10, but Oracle 11 keeps complaining about an invalid username or password. I verified that the password is correct - other apps work fine with the same credentials. Note that for regular users (without the DBA Privilege part), connecting to Oracle 11 works perfectly. So, what's wrong? Update: This is not an issue with case sensitivity - when constructing the connection string, the password case is not altered by my code, and the password works fine with other, non-.NET-applications. I suspect that this might be caused by the Oracle 10 client I'm using to connect to the 11 database. Oracle states that the client is upward-compatible, the only drawback being that you cannot use some new features of the database. However, SYSDBA connections clearly are not a new Oracle 11 feature, and - again - a non-.NET-app (Keeptool Hora) can connect using the same setup. Any other ideas? Update 2: The problem persists when using an Oracle 11 client :-(

    Read the article

  • Is this a bad indexing strategy for a table?

    - by llamaoo7
    The table in question is part of a database that a vendor's software uses on our network. The table contains metadata about files. The schema of the table is as follows: Metadata ResultID (PK, int, not null) MappedFieldname (char(50), not null) Fieldname (PK, char(50), not null) Fieldvalue (text, null) There is a clustered index on ResultID and Fieldname. This table typically contains millions of rows (in one case, it contains 500 million). The table is populated by 24 workers running 4 threads each when data is being "processed". This results in many non-sequential inserts. Later, after processing, more data is inserted into this table by some of our in-house software. The fragmentation for a given table is at least 50%. In the case of the largest table, it is at 90%. We do not have a DBA. I am aware we desperately need a DB maintenance strategy. As far as my background, I'm a college student working part time at this company. My question is this: is a clustered index the best way to go about this? Should another index be considered? Are there any good references for this and similar ad hoc DBA tasks?
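
    For reference, a sketch of the table as described, in SQL Server DDL (the clustered index is expressed here as a clustered primary key on ResultID and Fieldname; the vendor's actual definition may differ):

        -- Sketch reconstructed from the question's description; not the vendor's DDL.
        CREATE TABLE Metadata
        (
            ResultID        INT      NOT NULL,
            MappedFieldname CHAR(50) NOT NULL,
            Fieldname       CHAR(50) NOT NULL,
            Fieldvalue      TEXT     NULL,
            CONSTRAINT PK_Metadata PRIMARY KEY CLUSTERED (ResultID, Fieldname)
        );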

    Read the article

  • How to Load Oracle Tables From Hadoop Tutorial (Part 5 - Leveraging Parallelism in OSCH)

    - by Bob Hanckel
    Using OSCH: Beyond Hello World In the previous post we discussed a “Hello World” example for OSCH focusing on the mechanics of getting a toy end-to-end example working. In this post we are going to talk about how to make it work for big data loads. We will explain how to optimize an OSCH external table for load, paying particular attention to Oracle’s DOP (degree of parallelism), the number of external table location files we use, and the number of HDFS files that make up the payload. We will provide some rules that serve as best practices when using OSCH. The assumption is that you have read the previous post and have some end-to-end OSCH external tables working, and now you want to ramp up the size of the loads. Using OSCH External Tables for Access and Loading OSCH external tables are no different from any other Oracle external tables. They can be used to access HDFS content using Oracle SQL: SELECT * FROM my_hdfs_external_table; or use the same SQL access to load a table in Oracle: INSERT INTO my_oracle_table SELECT * FROM my_hdfs_external_table; To speed up the load time, you will want to control the degree of parallelism (i.e. DOP) and add two SQL hints. ALTER SESSION FORCE PARALLEL DML PARALLEL 8; ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8; INSERT /*+ append pq_distribute(my_oracle_table, none) */ INTO my_oracle_table SELECT * FROM my_hdfs_external_table; There are various ways of hinting at what level of DOP you want to use. The ALTER SESSION statements above force the issue, assuming you (the user of the session) are allowed to assert the DOP (more on that in the next section). Alternatively, you could embed additional parallel hints directly into the INSERT and SELECT clause respectively: /*+ parallel(my_oracle_table,8) */ /*+ parallel(my_hdfs_external_table,8) */ Note that the "append" hint lets you load a target table by reserving space above a given "high watermark" in storage and uses Direct Path load. In other words, it doesn't try to fill blocks that are already allocated and partially filled; it uses unallocated blocks. It is an optimized way of loading a table without incurring the typical resource overhead associated with run-of-the-mill inserts. The "pq_distribute" hint in this context unifies the INSERT and SELECT operators to make data flow during a load more efficient. Finally, your target Oracle table should be defined with "NOLOGGING" and "PARALLEL" attributes. The combination of "NOLOGGING" and use of the "append" hint disables REDO logging and its overhead. The "PARALLEL" clause tells Oracle to try to use parallel execution when operating on the target table. Determine Your DOP It might feel natural to build your datasets in Hadoop and then afterwards figure out how to tune the OSCH external table definition, but you should start backwards. You should focus on the Oracle database, specifically the DOP you want to use when loading (or accessing) HDFS content using external tables. The DOP in Oracle controls how many PQ slaves are launched in parallel when executing an external table. Typically the DOP is something you want Oracle to control transparently, but for loading content from Hadoop with OSCH, it's something that you will want to control. Oracle computes the maximum DOP that can be used by an Oracle user.
The maximum value that can be assigned is an integer value typically equal to the number of CPUs on your Oracle instances, times the number of cores per CPU, times the number of Oracle instances. For example, suppose you have a RAC environment with 2 Oracle instances, and suppose that each system has 2 CPUs with 32 cores. The maximum DOP would be 128 (i.e. 2*2*32). In point of fact, if you are running on a production system, the maximum DOP you are allowed to use will be restricted by the Oracle DBA. This is because using a system maximum DOP can subsume all system resources on Oracle and starve anything else that is executing. Obviously on a production system where resources need to be shared 24x7, this can’t be allowed to happen. The use cases for being able to run OSCH with a maximum DOP are when you have exclusive access to all the resources on an Oracle system. This can be the case when you are first seeding tables in a new Oracle database, or when normal activity in the production database can safely be taken off-line for a few hours to free up resources for a big incremental load. Using OSCH on high end machines (specifically Oracle Exadata and Oracle BDA cabled with Infiniband), this mode of operation can load up to 15TB per hour. The bottom line is that you should first figure out what DOP you will be allowed to run with by talking to the DBAs who manage the production system. You then use that number to derive the number of location files, and (optionally) the number of HDFS data files that you want to generate, assuming that is flexible. Rule 1: Find out the maximum DOP you will be allowed to use with OSCH on the target Oracle system. Determining the Number of Location Files Let’s assume that the DBA told you that your maximum DOP was 8. You want the number of location files in your external table to be big enough to utilize all 8 PQ slaves, and you want them to represent equally balanced workloads. Remember, location files in OSCH are metadata lists of HDFS files and are created using OSCH’s External Table tool. They also represent the workload size given to an individual Oracle PQ slave (i.e. a PQ slave is given one location file to process at a time, and only it will process the contents of the location file). Rule 2: The size of the workload of a single location file (and the PQ slave that processes it) is the sum of the content size of the HDFS files it lists. For example, if a location file lists 5 HDFS files which are each 100GB in size, the workload size for that location file is 500GB. The number of location files that you generate is something you control by providing a number as input to OSCH’s External Table tool. Rule 3: The number of location files chosen should be a small multiple of the DOP. Each location file represents one workload for one PQ slave, so the goal is to keep all slaves busy and try to give them equivalent workloads. Obviously if you run with a DOP of 8 but have 5 location files, only five PQ slaves will have something to do and the other three will have nothing to do and will quietly exit. If you run with 9 location files, then the PQ slaves will pick up the first 8 location files and, assuming they have equal workloads, will finish up about the same time. But the first PQ slave to finish its job will then be rescheduled to process the ninth location file, potentially doubling the end-to-end processing time. So for this DOP, using 8, 16, or 32 location files would be a good idea.
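
    Pulling the statements from the discussion above into one place, here is a consolidated sketch of the load pattern for the running example's DOP of 8 (table names are the post's placeholders; the v$parameter query is only an illustrative way to see what the instance could support, and the DOP you actually use is whatever the DBA grants):

        -- Illustrative: see what the instance could support, then use the DOP the DBA grants.
        SELECT name, value
        FROM   v$parameter
        WHERE  name IN ('cpu_count', 'parallel_threads_per_cpu', 'parallel_max_servers');

        -- Target table attributes recommended in the post.
        ALTER TABLE my_oracle_table NOLOGGING;
        ALTER TABLE my_oracle_table PARALLEL;

        -- Force the session to the agreed DOP (8 in the running example).
        ALTER SESSION FORCE PARALLEL DML PARALLEL 8;
        ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8;

        -- Direct-path, parallel load from the OSCH external table.
        INSERT /*+ append pq_distribute(my_oracle_table, none) */ INTO my_oracle_table
        SELECT * FROM my_hdfs_external_table;

        COMMIT;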
Determining the Number of HDFS Files Let’s start with the next rule and then explain it: Rule 4: The number of HDFS files should be a multiple of the number of location files, and the files should be roughly the same size. In our running example, the DOP is 8. This means that the number of location files should be a small multiple of 8. Remember that each location file represents a list of unique HDFS files to load, and that the sum of the files listed in each location file is a workload for one Oracle PQ slave. The OSCH External Table tool will look in an HDFS directory for a set of HDFS files to load. It will generate N location files (where N is the value you gave to the tool). It will then try to divvy up the HDFS files and do its best to make sure the workload across location files is as balanced as possible. (The tool uses a greedy algorithm that grabs the biggest HDFS file and delegates it to a particular location file. It then looks for the next biggest file and puts it in some other location file, and so on.) The tool’s ability to balance is reduced if HDFS file sizes are grossly out of balance or if there are too few of them. For example, suppose my DOP is 8 and the number of location files is 8. Suppose I have only 8 HDFS files, where one file is 900GB and the others are 100GB. When the tool tries to balance the load it will be forced to put the singleton 900GB file into one location file, and put each of the 100GB files in the 7 remaining location files. The load balance skew is 9 to 1. One PQ slave will be working overtime, while the slacker PQ slaves are off enjoying happy hour. If, however, the total payload (1600 GB) were broken up into smaller HDFS files, the OSCH External Table tool would have an easier time generating a list where each workload for each location file is relatively the same. Applying Rule 4 above to our DOP of 8, we could divide the workload into 160 files that were approximately 10 GB in size. For this scenario the OSCH External Table tool would populate each location file with 20 HDFS file references, and all location files would have similar workloads (approximately 200GB per location file). As a rule, when the OSCH External Table tool has to deal with more and smaller files it will be able to create more balanced loads. How small should HDFS files get? Not so small that the HDFS open and close file overhead starts having a substantial impact. For our performance test system (Exadata/BDA with Infiniband), I compared three OSCH loads of 1 TiB. One load had 128 HDFS files living in 64 location files where each HDFS file was about 8GB. I then did the same load with 12800 files where each HDFS file was about 80MB in size. The end-to-end load time was virtually the same. However, when I got ridiculously small (i.e. 128000 files at about 8MB per file), it started to make an impact and slow down the load time. What happens if you break Rules 3 or 4 above? Nothing draconian, everything will still function. You just won’t be taking full advantage of the generous DOP that was allocated to you by your friendly DBA. The key point of the rules articulated above is this: if you know that HDFS content is ultimately going to be loaded into Oracle using OSCH, it makes sense to chop it up into the right number of files of roughly the same size, derived from the DOP that you expect to use for loading. Next Steps So far we have talked about OLH and OSCH as alternative models for loading. That’s not quite the whole story.
They can be used together in a way that provides for more efficient OSCH loads and allows one to be more flexible about scheduling on a Hadoop cluster and an Oracle Database to perform load operations. The next lesson will talk about Oracle Data Pump files generated by OLH, and loaded using OSCH. It will also outline the pros and cons of using various load methods. This will be followed up with a final tutorial lesson focusing on how to optimize OLH and OSCH for use on Oracle's engineered systems: specifically Exadata and the BDA.

    Read the article

  • How do you fix the Performance Dashboard datetime overflow error

    - by Mike L
    I'm a programmer/DBA by accident and we're running SQL Server 2005 with Performance Dashboard for basic monitoring. The server has been up for a few weeks and now we can't drill into certain reports. Is there any way to reset these reports without a complete reboot? Edit: I bet the error message would help. I get this when I drill into the CPU graph: Error: Difference of two datetime columns caused overflow at runtime.
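
    For context (not part of the original question): DATEDIFF returns an INT, so taking a millisecond difference between two datetimes overflows once they are more than roughly 24.8 days apart, which is why the reports break only after the server has been up for a few weeks. A sketch of the overflow and of the split-the-difference pattern the usual workarounds rely on:

        -- Sketch only; this is not the Performance Dashboard's own code.
        DECLARE @start DATETIME, @end DATETIME;
        SET @start = '20050101';
        SET @end   = '20050215';   -- 45 days apart

        -- SELECT DATEDIFF(ms, @start, @end);   -- the kind of expression that overflows an INT

        -- Workaround pattern: day difference as BIGINT, plus the leftover milliseconds.
        SELECT CAST(DATEDIFF(day, @start, @end) AS BIGINT) * 86400000
             + DATEDIFF(ms, DATEADD(day, DATEDIFF(day, @start, @end), @start), @end) AS elapsed_ms;

    Restarting just the SQL Server service (rather than rebooting the machine) resets the instance start time the reports compute differences against, so the error typically clears until uptime again exceeds that threshold; a commonly described longer-term fix is to edit the dashboard's queries to compute the difference along the lines above.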

    Read the article

  • Install Sybase SQL Anywhere 11 as a Windows service

    - by student
    We are using Sybase SQL Anywhere 11. I am using the command line in a batch file to install/initialize the database (dbinit -dba %username%,%pwd% -p 4k %dbLocation%) and to start the database server (dbsrv11 %dbLocation%). What I really want is to install my database as a Windows service so it can be started and running automatically when the machine gets rebooted. But I want to keep using a batch file so it is easy to install/uninstall/change. Any Sybase experts here?

    Read the article

  • SQL Server Management Studio unable to connect to local instance.

    - by Ben Collins
    I'm not a DBA, I'm a developer, and I'm having trouble with SQL Server Management Studio. I installed SQL Server 2008 Standard on Windows Server 2008 R2, and according to SQL Server Configuration Manager, I've got two instances: OFFICESERVERS (for SharePoint) and MSSQLSERVER. When I open SQL Server Management Studio I can only discover OFFICESERVERS. I've checked the protocol configuration for both instances and didn't see anything that indicates to me why this would be. Any hints?

    Read the article

  • Windows 2003 Server Caching

    - by pablomedok
    We're experiencing almost daily table index corruption on Windows Server 2003. We are running an old application which uses DBF/CDX tables. Everything was fine for ages, but 6 months after we installed Advantage Database Server (which allows access to some tables from our website) we started to get index corruption problems, and we don't know whom to blame. We've tried to exclude all possible causes of this corruption. Now all users work in terminal mode, so no network problems can cause it, and OpLocks also can't be a reason. We changed hardware, network cards, and switches, reinstalled the server, and even moved to a new dedicated server. The only thing we can't exclude is ADS, because it should be working. Is it possible that local read/write caching causes the problem? E.g. one user or process uses cached data, later another user/process changes it, and later the first user changes it again without knowing about the first change. Is it possible theoretically? Is it possible that this problem is caused by improper file server or caching settings? Is it possible that normal users use non-cached data and ADS is using cached data? Or vice versa? Is it possible that each terminal user has its own cache? Or maybe the problem is about RAID caching somehow interfering with Windows Server caching? Or maybe there are some special settings for Windows Server for working with DBF tables that are being written simultaneously by several terminal users? Maybe there is a way to turn off caching for certain files to check it? Sometimes we get an index crash twice a day, sometimes everything is fine for 5 days in a row. Today only one user was working with the database in the evening (usually there are 30-50 users working simultaneously during working hours), so it's almost zero load on the server. Synchronization with the website is performed every 5 minutes during work hours and every 15 minutes in the evening and on weekends. We've done file access auditing and it shows that during website synchronizations the ADS server opens the table and index files for ReadEA and WriteEA though it performs only SELECT queries. ADS does UPDATE/INSERT queries but less frequently (not during regular synchronizations, but only when an order is placed by a website visitor). Please help me. We have been struggling with this problem for almost a year and still can't find any pattern or any clue about it. Here is my previous question about this issue on DBA: http://dba.stackexchange.com/questions/8646/foxpro-dbf-index-corruption

    Read the article

  • Error OEM (Oracle Enterprise Manager)

    - by user137520
    Dear friends, I was trying to configure OEM today. Everything seemed fine. I could log in to OEM, but when I tried to launch DBA Studio from there, I could not, and it said no database has been discovered yet. I don't understand why, since I created a database already. Has anyone experienced this? Thank you

    Read the article

  • How can I tell if my hard drive(s) have Battery Backed Write Cache?

    - by Riedsio
    How can I tell if my hard drives have a battery backed write cache (BBWC)? How can I tell if it is enabled and/or configured correctly? I don't have physical access to my server. It's a GNU/Linux box. I can provide supplemental incremental information/details as requested. My frame of reference is that of a DBA: I have access and privileges, but (usually) only tread where I know I am supposed to. :)

    Read the article

  • VS2010 Extension like Smart Paster?

    - by Eric J.
    Alex Papadimoulis' Smart Paster is a great little tool that can paste text in programmer-friendly ways (e.g. as a StringBuilder, as a language-specific string literal, etc.). However, it doesn't seem to be available for VS2010. Anyone know of a similar extension or of plans to port Smart Paster?

    Read the article

  • Perl Moose::Util::TypeConstraints bug? What is this error about the type name having invalid chars?

    - by alex8657
    I have been tracking a Moose::Util::TypeConstraints exception for hours, and I don't understand where it gets to check a type and tell me that the name is incorrect. I reduced the error to a small example to try to locate the problem, and it just shows me that I do not get it. Did I hit a Moose::Util::TypeConstraints bug? aoffice:new alex$ perl -c ../codesnippets/typeconstrainterror.pl ../codesnippets/typeconstrainterror.pl syntax OK aoffice:new alex$ perl -d ../codesnippets/typeconstrainterror.pl (...) DB<1> r Something::File::LocalFile=HASH(0x100d1bfa8) contains invalid characters for a type name. Names can contain alphanumeric character, ":", and "." at /opt/local/lib/perl5/vendor_perl/5.10.1/darwin-multi-2level/Moose/Util/TypeConstraints.pm line 508 Moose::Util::TypeConstraints::_create_type_constraint('Something::File::LocalFile=HASH(0x100d1bfa8)', undef, undef, undef, undef) called at /opt/local/lib/perl5/vendor_perl/5.10.1/darwin-multi-2level/Moose/Util/TypeConstraints.pm line 285 Moose::Util::TypeConstraints::type('Something::File::LocalFile=HASH(0x100d1bfa8)') called at ../codesnippets/typeconstrainterror.pl line 7 Something::File::is_slink('Something::File::LocalFile=HASH(0x100d1bfa8)') called at ../codesnippets/typeconstrainterror.pl line 33 Debugged program terminated. Use q to quit or R to restart, use o inhibit_exit to avoid stopping after program termination, h q, h R or h o to get additional info. Below, the code that crashes: package Something::File; use Moose; has 'type' =>(is=>'ro', isa=>'Str', writer=>'_set_type' ); sub is_slink { my $self = shift; return ( $self->type eq 'slink' ); } no Moose; __PACKAGE__->meta->make_immutable; 1; package Something::File::LocalFile; use Moose; use Moose::Util::TypeConstraints; extends 'Something::File'; subtype 'PositiveInt' => as 'Int' => where { $_ >0 } => message { 'Only positive greater than zero integers accepted' }; no Moose; __PACKAGE__->meta->make_immutable; 1; my $a = Something::File::LocalFile->new; # $a->_set_type('slink'); print $a->is_slink ." end\n";

    Read the article

  • Late Binding with LinFu & Oracle.DataAccess

    - by Alexander Stuckenholz
    Hello everybody, is there a good example of the late binding features of the LinFu framework? I developed a framework which is hard linked against a certain version of an Oracle client. Now I would like to be able to configure the version of the client without the need to rebuild the app. How can I do that with LinFu? Regards, Alex

    Read the article

  • Can't update contact details in Android using code

    - by masterkapu
    I'm trying to update/change a contact's ringtone using this code: ContentValues values = new ContentValues(); values.put(ContactsContract.Data.CUSTOM_RINGTONE, "D:/TempDownloads/BurpSounds/Alex.wav"); getContentResolver().update(ContactsContract.Contacts.CONTENT_URI, values , "DISPLAY_NAME = 'Ani'", null); I get the message "the application has stopped unexpectedly". What is wrong with my code, and how do I do it? Thanks

    Read the article

  • Separate groups of people based on members

    - by tevch
    I have groups of people. I need to place groups that share at least one member as far as possible from each other. Example: GroupA - John, Bob, Nick GroupB - Jack, Nick GroupC - Brian, Alex, Steve As you can see, GroupA and GroupB overlap (they both contain Nick). I need an algorithm to order the groups as GroupA-GroupC-GroupB. Thank you

    Read the article

  • Floating a UILabel above OpenFlow

    - by Edward
    How do you get a UILabel to float above Alex Fajkowski's implementation of CoverFlow, called OpenFlow? OK, I've figured it out. I just had to use bringSubviewToFront with the UILabel. Thanks to everybody who answered.

    Read the article

  • Generate dynamic xmlns

    - by alexbf
    Hello, I would like to dynamically generate xmlns attributes. I want to generate this in XSL: <Common:MainPageBase xmlns:Common="clr-namespace:ThisPartIsDynamic;assembly=ThisPartIsDynamic"> </Common:MainPageBase> How can I do that in XSL? Thanks, Alex

    Read the article

  • LotusNotes as a data source for a .NET C# site

    - by AlexFreitas
    Is it possible to have a connection to LotusNotes and use it as a data source for a C# project? We use LN for email/calendar :(... and management wants a web page that would interact with the calendar. I think this can all be done within Notes, but I would much rather do it in .NET. They also want some very specific functionality, some of which I'm not really sure can even be done in Notes. Cheers, -Alex

    Read the article

  • Trying to test for OS and space in filesystem on AIX

    - by Buzkie
    I need to check if a filesystem exists, and if it does exist, whether there is 300 MB of space in it. What I have so far: if [ "$(df -m /opt/IBM | grep -vE '^Filesystem' | awk '{print ($3)}')" < "300" ] then echo "not enough space in the target filesystem" exit 1 fi This throws an error. I don't really know what I'm doing in shell. Please help. -Alex

    Read the article
